# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Network speed test
# -
# ## Returning to the 20th Century
# + [markdown] slideshow={"slide_type": "notes"}
# Jupyter is all well and good, but sometimes what we want are simple Python scripts and a traditional scientific interface. This is where Spyder comes in...
# + [markdown] slideshow={"slide_type": "slide"}
# ## Starting Spyder
#
# Hit the Windows key and start typing "Spyder"
#
# When the snake-on-a-web icon appears, click it
# + [markdown] slideshow={"slide_type": "notes"}
# Spyder has been growing over the last couple of years and is quite well set up for scientific Python...
#
# You will see on the left, a large editor window like any other IDE. On the bottom right, is an IPython notebook. This is really just the text version (and origin of) Jupyter. You can use it that same way, modulo interactive graphs and plotting (which you can instead write to a file, say). The top right window provides a handy way of accessing object documentation.
#
# Begin by opening `network-test-client.py`.
# + [markdown] slideshow={"slide_type": "slide"}
# # Virtualenv
# ## Building sandcastles
# + [markdown] slideshow={"slide_type": "notes"}
# Virtualenv provides a means of sandboxing a Python installation, that is, installing a particular version of Python and all the modules you want in a directory, and using everything from there.
# + [markdown] slideshow={"slide_type": "fragment"}
# * Press the Windows key and start typing "Terminal" - when you see a black computer screen icon, click it.
# + [markdown] slideshow={"slide_type": "fragment"}
# * Execute **`cd python-course`** (Return) and then **`pyvenv`**
#
# * If you get a `Command not found` error, execute<br/>**`source setup-venv.sh`**<br/>and try again.
# + [markdown] slideshow={"slide_type": "notes"}
# Put up your arrows when done!
# + [markdown] slideshow={"slide_type": "notes"}
# `venv` is a Python module for setting up virtual environments - where you can create an "environment" and install packages completely separately from the system or any other environment. Basically, it's a directory with a load of Python modules.
# + [markdown] slideshow={"slide_type": "slide"}
# * In the `python-course` directory execute **`pyvenv env`**
# * This creates a new subdirectory called "env"
# * Execute **`source env/bin/activate`**
# * This loads the new environment
# + [markdown] slideshow={"slide_type": "notes"}
# On Windows, this is basically the same but without `source` and swapping slash direction. Mac is the same as Linux.
#
# However, I will recommend a tool called Anaconda, which can vastly simplify Python configuration on Windows and Mac - as I only work with Linux at present, I haven't needed it, but it is highly recommended for those systems. Anaconda is a "distribution" of Python, which bundles a version of the Python interpreter, a whole load of modules and a similar, but slightly different, way of working with environments. Instead of `pip`, it uses a package manager called `conda`. I will try and talk more about that at the next course, but it seems to have vastly simplified working with scientific Python on Windows, especially.
# + [markdown] slideshow={"slide_type": "slide"}
# * Run **`pip3 install sympy`**
# * Notice on the third-last line, it is working inside `env`
# + [markdown] slideshow={"slide_type": "notes"}
# `pip` is Python's home-grown package manager. `pip3` is the Python3 version, which is the one we want in this course. It installs modules from [PyPI](https://pypi.python.org/pypi), the official online repository for modules. It is similar to, say, *packagist* for PHP, *rubygems* for Ruby or *Hackage* for Haskell.
# + [markdown] slideshow={"slide_type": "fragment"}
# * When you want to return to the normal system Python, you can execute **`deactivate`**
# + [markdown] slideshow={"slide_type": "notes"}
# For the moment, we will keep going with the system tools, but if you want to play around with packages, without mucking up your whole installation, this is the recommended way.
# + [markdown] slideshow={"slide_type": "notes"}
# Since most of you are computer scientists, and I'm not, I am going to take a risk and theme this session on measuring latency between a server and client. Please ignore the simplistic methodology - if you find a few minutes spare, feel free to improve the approach! However, the aim here is to show you some tools, so don't worry too much about the numbers.
# + [markdown] slideshow={"slide_type": "slide"}
# # Latency
# ## Time tracking
# + [markdown] slideshow={"slide_type": "fragment"}
# Return to Spyder and open up **`network_test_client.py`**
# + [markdown] slideshow={"slide_type": "notes"}
# Nothing here should be too shocking, except the fact that I'm using my server for this test. Please keep testing values low, bearing in mind we're multiplying several loops together.
#
# So far, this code simply sets up a few parameters and the logger, as we saw earlier. In the gap near the bottom we will put some code to calculate a variable `return_time`, which will be the average delay (from send to recv) over `repeats` number of 1K bounces off a server, (on a single socket connection).
# + [markdown] slideshow={"slide_type": "fragment"}
# Follow along typing with me or, if you find it hard to concentrate, skip to **`network-test-client-partial-1.py`**
# + [markdown] slideshow={"slide_type": "notes"}
# Now, I will start a function that we wish to time. Anyone who feels I am polluting the purity of the timing function by adding extra calls is welcome to improve their version!
# + slideshow={"slide_type": "slide"}
# Define our actual measured operation
def round_trip(skt):
# Create a random message to test our connection
payload = os.urandom(1024)
# Network-limited part
skt.sendall(payload)
received_payload = skt.recv(1024)
if received_payload != payload:
raise IOError("We received an incorrect echo")
# + [markdown] slideshow={"slide_type": "notes"}
# We define this as a function that takes a TCP socket. It creates 1024 bytes of random data (this is the crypto-quality generator - overkill but useful for you to be aware of). We send our payload off down the wire, and expect to see it arrive here. Any exceptions will get thrown straight through for simplicity, but we will catch them later.
#
# We even throw our own exception when there is a problem with the payload, using `IOError`, a common superclass for most IO exceptions. It can be helpful to subclass `IOError` (or a more appropriate error class) to create more specific exceptions - [see this link for a tutorial](https://docs.python.org/3/tutorial/errors.html#tut-userexceptions). Remember that the decision in an `except` statement, whether to handle an exception or pass it on, is based on the class. Creating your own allows you to catch it and it only in a `try-except`.
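As a quick sketch of that subclassing idea (the class name `EchoMismatchError` is made up for illustration), deriving from `IOError` lets an `except` clause target exactly this failure and nothing else:

```python
class EchoMismatchError(IOError):
    """Raised when the echoed payload differs from what was sent."""

caught = None
try:
    raise EchoMismatchError("We received an incorrect echo")
except EchoMismatchError as e:
    # only this specific error is caught here; other IOErrors pass through
    caught = e

print(type(caught).__name__)  # EchoMismatchError
```

Because `EchoMismatchError` is still an `IOError`, any existing handler for the superclass continues to work too.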
# + slideshow={"slide_type": "slide"}
# Use a `with` context to make sure the socket automatically
# gets cleaned up
with socket.create_connection(address=(host, port), timeout=timeout) as skt:
logger.info("Created connection")
logger.info("Completed trial")
# + [markdown] slideshow={"slide_type": "notes"}
# We start writing this after our function. This is part of the main flow. I should mention, a more common pattern in Python, even in scripts, is to have virtually no global scope code like this. Instead, you would create a `main` or `run` function, like many compiled languages, and your only top-level call would be to run it. If you're feeling even more adventurous, you would wrap this in a class and create an application object, then call its `run` method, say.
#
# To create the connection, we use the `socket` library convenience function `create_connection`. This saves a few lines, creating, listening and binding, but, for those of you who care about your network code, you can do those explicitly very easily. We have also used three of our parameters.
#
# Note that `with` has made another appearance - anything with a so-called *context manager* methods works with a `with` statement. Here, the `socket` is guaranteed to be closed on exit, even if we leave via an exception. We name it `skt`.
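To see the context-manager protocol in isolation, here is a minimal sketch: any class providing `__enter__` and `__exit__` works with a `with` statement (the `events` list is only there to make the call order visible):

```python
events = []

class ManagedResource:
    def __enter__(self):
        # runs when the `with` block is entered
        events.append("acquired")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # runs on exit, even if an exception occurred inside the block
        events.append("released")
        return False  # do not suppress exceptions

with ManagedResource():
    events.append("working")

print(events)  # ['acquired', 'working', 'released']
```

This is exactly the guarantee the socket relies on: `__exit__` (which closes the socket) always runs.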
# + slideshow={"slide_type": "slide"}
try:
with socket.create_connection(address=(host, port), timeout=timeout) as skt:
logger.info("Created connection")
# TESTING CODE HERE
logger.info("Completed trial")
except OSError as e:
logger.error(
"We could not create a socket connection to the "
"remote echo server"
)
raise e
# + [markdown] slideshow={"slide_type": "notes"}
# I mentioned earlier that `try-except` is your friend. Bear in mind, if you highlight several lines in Spyder, Tab will indent them - Shift+Tab deindents.
#
# If we get a socket error from anywhere inside the `with` (including our test routine from earlier), it will get caught here. In practice this is only being used to inject an extra logging line, but it illustrates the point. Note that we aren't catching the `IOError` I mentioned earlier. To do so, you can add an extra except clause, or turn "`OSError as e`" into "`(IOError, OSError) as e`", if we are happy to use the same one. Note that you do need the tuple parens.
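A toy illustration of catching with a tuple of classes. Note that in Python 3, `IOError` is actually an alias of `OSError`, so either name would match on its own; the tuple form is still the general pattern for catching several unrelated exception types:

```python
err = None
try:
    raise IOError("We received an incorrect echo")
except (IOError, OSError) as e:   # the tuple parens are required
    err = e

print(type(err).__name__)  # OSError (IOError is an alias in Python 3)
```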
# + slideshow={"slide_type": "slide"}
logger.info("Created connection")
# This is going to add a bit of misleading overhead, but for this
# purpose we'll use lambda for simplicity
return_time = timeit.timeit(
lambda: round_trip(skt),
number=repeats
)
logger.info("Completed trial")
# + [markdown] slideshow={"slide_type": "notes"}
# Finally, we make the call that will run our function. We use a module called `timeit` for this purpose. Again, this follows the Python theme of "don't roll your own, when experts have done it for you". If you're that ambitious, it is better to improve their code if you can than start from scratch, everybody wins. `timeit` is a core module and, supposedly, avoids a number of common function timing pitfalls.
#
# To use it, you supply a routine to test timings for as the first argument, and the number of repeats as the second. Please keep that `number` argument in there, as the default is 100,000 and I would rather you all didn't hit my server with 100M of socket traffic at the same time.
#
# Another feature of Python has been slipped in there at the same time. You can see a `lambda` function. This is a very simple construct of the form:
# + [markdown] slideshow={"slide_type": "subslide"}
# ```python
# lambda arg1, arg2: statement_using(arg1 + arg2)
# ```
# + [markdown] slideshow={"slide_type": "notes"}
# It is equivalent to
# + [markdown] slideshow={"slide_type": "fragment"}
# ```python
# def func(arg1, arg2):
# return statement_using(arg1 + arg2)
# ```
# + [markdown] slideshow={"slide_type": "notes"}
# Basically, anywhere we would use "func", passing it as a callback or whatever, we can swap our anonymous lambda function. In this particular case, `timeit` will always call the function in the first argument with no arguments, but we need to pass the socket to our routine. How do we solve this? By creating a function that `timeit` can call with no arguments, but that forwards the call on to `round_trip` with the `skt` variable shoehorned in.
#
# Why don't we just name a function and forget about lambda? It is subjective, but here it is likely to confuse our code - what we need is a single line function that gets called with no arguments and calls `round_trip` with one. If we add another routine called `round_trip_caller` or something like that, on first glance we will wonder where it is being used and why, it doubles the number of `def` blocks in our code, and adds a couple of extra source lines that don't really clarify anything that a good comment wouldn't fix.
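To see the forwarding pattern in isolation, here is a self-contained sketch with a dummy `round_trip` standing in for the real socket routine (no network involved):

```python
import timeit

def round_trip(payload):
    # stand-in for the real socket round trip: just copy the bytes
    return bytes(payload)

payload = b"x" * 1024
# the lambda adapts our one-argument routine to the zero-argument
# callable that timeit expects
elapsed = timeit.timeit(lambda: round_trip(payload), number=1000)
print(elapsed >= 0.0)  # True
```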
# + [markdown] slideshow={"slide_type": "slide"}
# Hit F5, or go to `Run->Run`
#
# If you get a dialog saying "`Run Settings`", choose "`Execute in a new dedicated Python console`" and continue.
# + [markdown] slideshow={"slide_type": "notes"}
# You should now see some text appearing on the lower right hand pane. It shows the output of your code. You could also run this script from the command line with "`python3 network-test-client-complete.py`".
#
# The final output line should show the average time taken, and be somewhere in the 100ths of a second. Well that's fine, but suppose we want to see if that bears up under simultaneous calls, instead of just consecutive ones. This is where threading comes in.
# + [markdown] slideshow={"slide_type": "slide"}
# # Threading our way
# ## Weaving Python
# + [markdown] slideshow={"slide_type": "notes"}
# Python makes threading straightforward, (with a couple of caveats). We will start with a short example and expand it using the code we have already written.
# + [markdown] slideshow={"slide_type": "fragment"}
# Open up "`network-test-client-2.py`"
# + [markdown] slideshow={"slide_type": "notes"}
# We have already imported the `threading` module. Now we need some threads...
# + slideshow={"slide_type": "slide"}
# Create thread_count threads, each targeting our `run` routine
threads = [threading.Thread(target=run) for i in range(thread_count)]
# Start each thread running in parallel
for thread in threads:
thread.start()
# Wait for all threads to complete
for thread in threads:
thread.join()
# + [markdown] slideshow={"slide_type": "notes"}
# First we introduce the *list comprehension*. This is a Python construct that embeds a loop in quite a few possible places. Mostly you will see comprehensions used to create lists and dicts. The format is (for basic use):
# + [markdown] slideshow={"slide_type": "fragment"}
# ```python
# [fn(x) for x in iterable] --- e.g. [s.upper() for s in strings]
# ```
# + [markdown] slideshow={"slide_type": "notes"}
# This takes each item in an iterable, such as list, and does something to it to provide each entry in a new list. The example given goes through a list of, say, strings and uppercases all of them. The whole thing is another (equal size) list. In our case, we are using a bit of short-hand to say we want "`thread_count`" threads, each element in the list being a new instance of the `threading.Thread` class.
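The uppercasing example from the slide, written out runnably:

```python
strings = ["alpha", "beta", "gamma"]
upper = [s.upper() for s in strings]  # one new entry per input item
print(upper)  # ['ALPHA', 'BETA', 'GAMMA']
```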
# + [markdown] slideshow={"slide_type": "notes"}
# We then start each thread, so a new parallel run of the `run` routine heads off. For completeness, we make sure every thread has finished before we reach the final `logging` statement. The `join` method blocks returning until that thread has completed.
# + slideshow={"slide_type": "slide"}
def run():
# Get the current thread
currentThread = threading.currentThread()
# Send out a message
logging.info("Hi, my name is {name}".format(
name=currentThread.getName()
))
# + [markdown] slideshow={"slide_type": "notes"}
# Here we add some simple text to the `run` routine. `threading` provides a handy `currentThread` function that each thread can call to get itself. When it does, we can get a name for it. We use a new method on the string class here: `format`. This allows us to name fields in the string and is actually the recommended approach. The values themselves are passed as named arguments to `format`.
#
# Try running again - you should get a "Hello" from every thread.
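The `format` call on its own, outside any thread (the name `Thread-1` here is just an example value):

```python
# named fields in the template are filled by keyword arguments
msg = "Hi, my name is {name}".format(name="Thread-1")
print(msg)  # Hi, my name is Thread-1
```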
# + [markdown] slideshow={"slide_type": "notes"}
# Unfortunately, we cannot rely on thread names to be unique and, while all threads get a unique ID, it's horrendously un-user-friendly. As such, we will name our threads from 1 up to `thread_count`.
# + slideshow={"slide_type": "slide"}
# Generate some simple numeric way to refer to these
thread_indices = range(1, thread_count + 1)
threads = {i: threading.Thread(target=run) for i in thread_indices}
# The items method turns a dict into pairs (tuples) of key and value
for idx, thread in threads.items():
thread.index = idx
thread.start()
# Wait for all threads to complete
for thread in threads.values():
thread.join()
# + [markdown] slideshow={"slide_type": "notes"}
# First of all, we change our threads list to a dict - that makes more conceptual sense if we are naming them. Our list of names is just the integers from 1 to `thread_count`, so we can use `range`. You can see on the second line a variation of the comprehension notation that we saw a minute ago, applied to dicts. The only change is that we now supply a key and a colon before the value. This produces a dict like any other, mapping `thread_count` integers to `thread_count` new threads.
#
# To loop, we introduce a couple of methods - `dict.items`, which returns a key-value tuple (pair) for each element - and `dict.values`, which returns all of the values, with no keys. As you can imagine, there is also `dict.keys`, (in fact, if you use the dict itself as the loop iterable, you will get only the keys).
#
# We cheekily slip in a dynamic modification to the thread object. This isn't extremely bad, but it's not the tidiest way of passing information - we don't know for sure that threading.Thread or its superclasses have no `index` member, for instance. However, it does highlight the fact that objects in Python, by default, can have members added on the fly.
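The dict comprehension and the three access methods, in miniature (the squares mapping is just a stand-in for the thread dict):

```python
# dict comprehension: key, colon, value
squares = {i: i * i for i in range(1, 4)}
print(squares)                  # {1: 1, 2: 4, 3: 9}
pairs = list(squares.items())   # key-value tuples
print(pairs)                    # [(1, 1), (2, 4), (3, 9)]
print(list(squares.values()))   # [1, 4, 9]
print(list(squares.keys()))     # [1, 2, 3]
```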
# + slideshow={"slide_type": "slide"}
def run():
currentThread = threading.currentThread()
# Send out a message
logging.info("Hi, my name is {name} and my index is {index}".format(
name=currentThread.getName(),
index=currentThread.index
))
# + [markdown] slideshow={"slide_type": "notes"}
# Now we have updated "`run`" with a minor extension. Try executing the code - you should now get a unique number from 1 to `thread_count` from each thread. Check your code matches `network-test-client2-partial2.py`.
# + [markdown] slideshow={"slide_type": "slide"}
# # Challenge
# ## Stars up!
#
# Combine our first code into the `run` function to produce a script that tests 10 times (timeit arg) from each of 10 threads.
#
# Use a global list `results` to store the `average_return_time` for each thread as a tuple `(index, average_return_time)`.
# + [markdown] slideshow={"slide_type": "notes"}
# Don't worry about atomic operations for the moment.
# + [markdown] slideshow={"slide_type": "slide"}
# # Combined Code
#
# Compare with `network-test-client2-complete.py`
# + slideshow={"slide_type": "-"}
results = []
lock = threading.Lock()
thread_indices = range(1, thread_count + 1)
...
# + [markdown] slideshow={"slide_type": "notes"}
# This is the approach I used. I told you not to worry about atomic operations (Python, for built-ins, actually looks after this itself). However, to illustrate, I have created a lock object, along with the `results` list, just before the `thread_indices` assignment.
# + slideshow={"slide_type": "slide"}
# ... (this goes inside `run`, at the end of the over-arching socket try-except)
# Strictly, a lock isn't required for appending to a list, but this is an
# opportunity to demonstrate the use of locks
with lock:
results.append((currentThread.index, average_return_time))
logger.info("Average time taken: {delay} s".format(delay=average_return_time))
# + [markdown] slideshow={"slide_type": "notes"}
# Inside the `run`, we have the original try-except. At the end of it, I have updated `results` with the pair I described. This is the integer index and the average return time. This shows the diversity of `with` - here it succinctly grabs and releases the lock before accessing the global results. Clear and concise.
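If you want to play with the lock pattern away from the networking code, a self-contained sketch might look like this (the `worker` function and its squared values are invented purely for illustration):

```python
import threading

results = []
lock = threading.Lock()

def worker(index):
    # `with lock:` acquires on entry and releases on exit,
    # even if an exception is raised inside the block
    with lock:
        results.append((index, index * index))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, 4)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print(sorted(results))  # [(1, 1), (2, 4), (3, 9)]
```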
# + [markdown] slideshow={"slide_type": "notes"}
# Now... how do we analyse this?
# +
# Now we switch our results list (thread_count rows of two values) to a numpy structure
data = np.array(results)
# However, we are likely to want to play around with the statistics, in
# Jupyter or elsewhere, so we save them...
np.save(output_filename, data)
# -
# Right at the end, we have this. Rather than running our code for every analysis, we dump out a numpy object that we can read in in a separate script. And so you have it! Running `network-test-client2-complete.py` (or updating your own code) will output a file with this data.
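The round trip through a file, sketched with made-up result values (the `results.npy` filename is just for this example):

```python
import numpy as np

# hypothetical per-thread results: (index, average return time)
results = [(1, 0.012), (2, 0.011), (3, 0.013)]
data = np.array(results)
np.save("results", data)          # ".npy" is appended automatically
loaded = np.load("results.npy")   # read it back in a separate script
print(loaded.shape)  # (3, 2)
```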
# + [markdown] slideshow={"slide_type": "slide"}
# # The Ghost of Coding Future
# ## Styling for future you
# + [markdown] slideshow={"slide_type": "notes"}
# Only recently, having to work with code written by new Pythoners while my appeals for *some* code style fell on deaf ears, have I realised how worthwhile it is to emphasize this at the start. Not that it was worse than any other language, and as you're mostly computer scientists rather than physical scientists, decent style is just the status quo.
#
# However, half the point of Python is that this *should not* be a problem in Python, and so Python-style is part of learning the language. I know for a fact, my Python was pretty ropey when I started out, and I paid the price when I went back six months later to edit some of it. However, I was simultaneously editing my six month old mathematician C++, so even with the ropey Python, I was sold on the benefits.
# + [markdown] slideshow={"slide_type": "slide"}
# To begin, I am going to give you a group 5 minute challenge:
#
# * rewrite the code in `bad-python.py` to print, when `newversion` is `True`:
#
# ```
# 1:PRINTING VALUES
#
# 2:0.84 is sin(x), 0.54 is cos(x)
# 3:0.91 is sin(x), -0.42 is cos(x)
# 4:0.14 is sin(x), -0.99 is cos(x)
# 5:-0.76 is sin(x), -0.65 is cos(x)
# 6:-0.96 is sin(x), 0.28 is cos(x)
# 7:-0.28 is sin(x), 0.96 is cos(x)
# 8:0.66 is sin(x), 0.75 is cos(x)
# 9:0.99 is sin(x), -0.15 is cos(x)
# 10:0.41 is sin(x), -0.91 is cos(x)
# ```
# * return to original functionality when `newversion` is `False`
# + [markdown] slideshow={"slide_type": "notes"}
# There are a few new features snuck in there - use Google and ask on Etherpad to find out about them. If you have any ideas or hints, put them into Etherpad - exchange ideas! And put up your stars!
#
# This isn't just me being irritating - this is the kind of code accretion that can happen with shortcuts to include a feature - writing code rather than using libraries, taking the first solution rather than looking for a Pythonic one... it's not hard to end up with this sort of thing...
# + [markdown] slideshow={"slide_type": "slide"}
# # The Revelation
#
# "*Scrooge hung his head to hear his own words quoted by the Spirit, and was overcome with penitence and grief.*"<br/> ~ A Christmas Carol, Ch. Dickens
#
# (also me after dealing with past-me's code)
# -
# We will try this now with slightly more readable code. Still not unimpeachable, and there's a few niceties left out for simplicity in this lesson, so don't take it as perfection!
# + [markdown] slideshow={"slide_type": "slide"}
# Now try it with better code:
#
# * rewrite the code in `better-python.py` to print, when `newversion` is `True`:
#
# ```
# 1:PRINTING VALUES OF SIN AND COS FOR x IN 1, 2,..., 9
#
# 2:0.84 is sin(x), 0.54 is cos(x)
# 3:0.91 is sin(x), -0.42 is cos(x)
# 4:0.14 is sin(x), -0.99 is cos(x)
# 5:-0.76 is sin(x), -0.65 is cos(x)
# 6:-0.96 is sin(x), 0.28 is cos(x)
# 7:-0.28 is sin(x), 0.96 is cos(x)
# 8:0.66 is sin(x), 0.75 is cos(x)
# 9:0.99 is sin(x), -0.15 is cos(x)
# 10:0.41 is sin(x), -0.91 is cos(x)
# ```
# * return to original functionality when `newversion` is `False`
# + [markdown] slideshow={"slide_type": "notes"}
# TIP 1: [zip](https://docs.python.org/3/library/functions.html#zip) pairs up items in equal length lists/arrays/etc. and turns them into a series of tuples
#
# TIP 2: a `for` loop can unpack each N-length tuple in the iterable into N comma-separated variables on the left, e.g.
# ```python
# x = [(1, 3, 1), (4, 2, 9), (1, 0, 10), (9, 18, 1)]
# for a, b, c in x:
# print("A+B/C =", a + b / c)
# ```
# -
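Combining the two tips, a sketch of how `zip` plus tuple unpacking might drive the output format (the values are the first few sin/cos values from the expected output above):

```python
values_sin = [0.84, 0.91, 0.14]
values_cos = [0.54, -0.42, -0.99]
lines = []
# zip pairs the two lists; the for loop unpacks each pair
for s, c in zip(values_sin, values_cos):
    lines.append("{s} is sin(x), {c} is cos(x)".format(s=s, c=c))
print(lines[0])  # 0.84 is sin(x), 0.54 is cos(x)
```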
# # Final Challenge - Ease of extension
# ## It is easier to extend good code than PhD funding
# + [markdown] slideshow={"slide_type": "notes"}
# So, experiment wildly, which Python is awesome for, but also aim to write for panicking future-you, who wants to get last-minute final chapter stuff added quickly and painlessly...
# + [markdown] slideshow={"slide_type": "slide"}
# Use `matplotlib` or `bokeh` to add plotting functionality to `better-python.py`. Note that both of them can output to a file (`bokeh` to HTML for interactivity).
# + [markdown] slideshow={"slide_type": "notes"}
# Do this whatever way you want - with title, axes labels, interactivity, line colours, separate functionality for newversion on and off (or just when on). Add notes to Etherpad to suggest original ideas for others also, and let us know your method.
# -
# # docstrings
# # threading & sockets
# # Virtualenv
# # module installation
# # FFT
# # network analysis
# # Generators
# # Breakpoints and variables in Spyder
| Latency.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# +
import os
import pandas as pd
import numpy as np
import sys
from collections import defaultdict
import warnings
warnings.filterwarnings('ignore')
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import gridspec
import matplotlib.patches as patches
import matplotlib.colors as clr
from matplotlib.colors import ListedColormap, LinearSegmentedColormap, BoundaryNorm
from matplotlib import collections as mc
from matplotlib.ticker import FuncFormatter
import matplotlib.lines as mlines
sys.path.append("scripts/")
import scripts.conf as conf
import scripts.oncotree
conf.config_params()
oncotree = scripts.oncotree.Oncotree()
os.makedirs("raw_plots",exist_ok=True)
os.makedirs("source_data",exist_ok=True)
# -
if not(os.path.exists("source_data/model_information_including_generals.tsv.gz")):
# ! python3 scripts/prepare_table_model_types.py
# # Extended Figure 8, panel 8b
# ### First, load the source data.
df_table_ratios = pd.read_csv("source_data/ratio_models_by_tumor_type_model.tsv",sep="\t")
df_table_ratios.set_index("CANCER_TYPE",inplace=True)
df = pd.read_csv("source_data/unique_observed_mutations_with_type_model.tsv.gz",sep="\t")
# save the total counts (barplots in the heatmap)
d_counts,d_counts_total = defaultdict(lambda: 0.0), defaultdict(lambda: 0.0)
for type_model,ttype,count in df.groupby(["type_model","CANCER_TYPE"],as_index=False).agg({"pos":"count"}).values:
d_counts[(type_model,ttype)] = count
for ttype,count in df.groupby(["CANCER_TYPE"],as_index=False).agg({"pos":"count"}).values:
d_counts_total[ttype] = count
# ### Prepare the data for plotting, including colors
order_categories=df_table_ratios.T.index
order_ttypes=list(df_table_ratios.T.columns.values)
dict_colors = {"gene/ttype specific":"#e6550d","gene specific":"#fdae6b","gene other ttype":"#fee6ce","no-model":"white"}
x=df[["gene","CANCER_TYPE","type_model"]].drop_duplicates().values
ttypes = set(df["CANCER_TYPE"].unique())
d_data = defaultdict(lambda: "white")
for gene,ttype,type_model in x:
d_data[(gene,ttype)]=dict_colors[type_model]
# ### Generate a row per gene...
x=df.groupby(["gene","CANCER_TYPE"]).agg({"pos":"count"}).sort_values("pos",ascending=False)
x.reset_index(inplace=True)
heatmap_total = x.pivot_table(index="gene",values="pos",columns=["CANCER_TYPE"]).fillna(0.0)[order_ttypes]
order_genes = heatmap_total.sum(axis=1).sort_values(ascending=False).index[0:20]
# ### Select top genes (manual selection, as an example, any other gene could be selected)
selected_genes = ["TP53","KRAS","PTEN","PIK3CA","CTNNB1","RB1","FBXW7","ARID1A","VHL","EGFR","IDH1","NOTCH1","APC","BRAF","SMAD4","NF1","NFE2L2","ALK","CDKN2A","KMT2D"]
x=df[df["gene"].isin(selected_genes)].groupby(["gene","CANCER_TYPE"]).agg({"pos":"count"}).sort_values("pos",ascending=False)
x.reset_index(inplace=True)
order_ttypes_n = [ttype for ttype in order_ttypes if (ttype in x["CANCER_TYPE"].unique()) and ttype in df[(df["gene"].isin(selected_genes))&(df["type_model"]=="gene/ttype specific")]["CANCER_TYPE"].unique() ]
heatmap_total = x.pivot_table(index="gene",values="pos",columns=["CANCER_TYPE"]).fillna(0.0)[order_ttypes_n]
count_genes = heatmap_total.sum(axis=1).sort_values(ascending=False)
max_counts=np.nanmax(count_genes.values)
order_genes = list(heatmap_total.sum(axis=1).sort_values(ascending=False).index[0:20])
# ### Function to create the legend...
def create_legend():
c = ["Gene/tumor type specific","Tumor type specific","Gene specific", "Other general"]
colors = ["#762a83","#af8dc3","#1b7837","#7fbf7b"]
l = []
for i in range(0,len(c)):
l.append(mlines.Line2D([], [], color=colors[i], marker='.', markersize=25, label=c[i],lw=0.0))
return l
# ### Render the plot!
# +
fig,ax=plt.subplots(figsize=(10,8))
N=len(order_genes)+1
gs = gridspec.GridSpec(ncols=2,nrows=N,height_ratios=[1 for x in range(0,len(order_genes))]+[3],width_ratios=[20,1],wspace=0.0)
ax0 = plt.subplot(gs[0])
ax1 = plt.subplot(gs[1])
axis = [ax0,ax1]
len_columns=len(order_ttypes_n)
for i in range(2,N*2,2):
axis.append(plt.subplot(gs[i],sharex=ax0))
axis.append(plt.subplot(gs[i+1],sharey=axis[-1]))
i = 0
for j in range(0,len(order_genes)*2,2):
ax = axis[j]
ax.get_xaxis().set_visible(False)
#sizes = [np.nanmin([75,x/3])+35 if x > 0 else 0.0 for x in heatmap_total.loc[order_genes[i]].values]
colors_gene = [d_data[(order_genes[i],v)] for v in order_ttypes_n]
ax.scatter(x=list(range(1,len_columns+1)),y=np.zeros(len_columns),s=80,color=colors_gene,edgecolors=[ "black" if color_gene != "white" else "white" for color_gene in colors_gene ],linewidths=0.5,clip_on=False)
ax.spines["top"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.set_xticks([])
ax.axhline(y=0,xmin=0.05,xmax=1.1,color="#C0C0C0",lw=0.25,alpha=0.5)
ax.set_ylabel(order_genes[i],fontsize=14,rotation=0,verticalalignment="center", labelpad=25)
ax.set_yticks([])
ax.set_xlabel("")
i = i +1
i = 0
for j in range(1,len(order_genes)*2,2):
ax = axis[j]
ax.get_xaxis().set_visible(False)
#ax.get_yaxis().set_visible(False)
value = count_genes.loc[order_genes[i]]
colors_gene = [d_data[(order_genes[i],v)] for v in order_ttypes_n]
ax.barh(y=0,width=value,color="black",clip_on=True,height=0.8)
ax.annotate(xy=(value+5,-0.2),s=str(int(value)),fontsize=12)
ax.spines["top"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel("")
ax.set_ylabel("")
ax.set_xlim(0,max_counts)
i = i +1
ax=axis[-1]
ax.set_axis_off()
ax=axis[-2]
ax.bar(x=range(1,len(order_ttypes_n)+1),height=[np.log2(d_counts_total[ttype]) for ttype in order_ttypes_n],color="black",clip_on=False,width=0.6)
ax.spines["top"].set_visible(False)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["bottom"].set_visible(False)
ax.get_xaxis().set_visible(True)
ax.set_ylabel("Driver mutations\n (log2)",fontsize=11)
ax.set_yticks([0,2,4,6,8,10])
ax.set_yticklabels([1,4,16,64,256,1024],fontsize=11)
ax.set_xticks(list(range(1,len_columns+1)))
ax.set_xticklabels(order_ttypes_n,rotation=90)
ax.tick_params(axis = 'x', labelsize =14 , pad=0.25,width=0.25, length = 1.5)
ax0.set_title("Models used for top mutated genes across intOGen cohorts",fontsize=15)
plt.savefig("raw_plots/top_genes_models_examples.pdf", dpi=800,bbox_inches="tight")
plt.show()
# -
| Extended_Figure_8/display_panels_Extended_Figure_8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="_ZIaVMDYKE9H"
import pandas as pd
import numpy as np
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="DJEpGSQEK-8k" outputId="45bb56f9-1743-4935-c1fd-24d5d75acc17"
df = pd.DataFrame(pd.read_csv('train.csv'))
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="tXLs4Pp6LMMN" outputId="8b042c58-5c5c-47d5-a1d0-c6b4e215d74d"
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="y-91VYvtMOuB" outputId="afb63a0b-131b-4b30-a1e1-e3222979567f"
df.isnull().sum()
# + [markdown] id="AtR9L5WFNNVn"
# Separating out the columns that have more than 35% of their values missing in the dataset
# + colab={"base_uri": "https://localhost:8080/"} id="gqAOQHB9MrVy" outputId="3073add3-99bf-4f87-e67d-01a78e2fda43"
drop_col = df.isnull().sum()[df.isnull().sum()>(35/100 * df.shape[0])]
drop_col
# + colab={"base_uri": "https://localhost:8080/"} id="d2VZJtgdNHAY" outputId="6abc3a4c-d3f3-449f-92fa-e702f08241b5"
drop_col.index
# + colab={"base_uri": "https://localhost:8080/"} id="8f1J-nJbNmRp" outputId="6ee615aa-c404-419b-91aa-71d64ae364e4"
df.drop(drop_col.index, axis=1, inplace=True)
df.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="u20lEz_5N8ir" outputId="2bfc4d79-cd80-458f-d88d-3a9993e1490d"
df.fillna(df.mean(numeric_only=True), inplace=True)  # numeric_only skips string columns such as Name and Embarked
df.isnull().sum()
# + [markdown] id="8UCYUddVORYi"
# Because `Embarked` contains string values, we inspect that column separately from the others - string columns have no mean to impute with
# + colab={"base_uri": "https://localhost:8080/"} id="CKOiQAFoO2nU" outputId="e012b435-c58e-4946-ae49-afd98336b44b"
df['Embarked'].describe()
# + id="x01R0cW3Oxc-"
df['Embarked'].fillna('S',inplace=True)
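Hard-coding `'S'` works because it is the most frequent port in the `describe()` output above; a sketch that derives the fill value from the data itself is slightly more robust (toy frame shown, not the competition data):

```python
import pandas as pd

# Toy frame standing in for the Titanic data
df = pd.DataFrame({'Embarked': ['S', 'C', None, 'S', 'Q', 'S']})

# Most frequent category, derived rather than hard-coded
fill_value = df['Embarked'].mode()[0]
df['Embarked'] = df['Embarked'].fillna(fill_value)

print(fill_value)                     # S
print(df['Embarked'].isnull().sum())  # 0
```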
# + colab={"base_uri": "https://localhost:8080/"} id="ycUKG_dHPQ0m" outputId="535275f6-0521-457c-f45e-fb3f7bd597bb"
df.isnull().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 254} id="B_M3xoFSPaGg" outputId="54f508e0-262f-43ca-d36a-6d595c5a3377"
df.corr()
# + [markdown] id="nrq-2zYqPmaS"
# sibsp: Number of Siblings/Spouses Aboard
#
#
# parch: Number of Parents/Children Aboard
#
# so we can make a new column, **FamilySize**, by combining these two columns
# + colab={"base_uri": "https://localhost:8080/", "height": 225} id="tMDctuMsQJKV" outputId="58d65f74-0f16-4f80-f55b-01e8955b4f5f"
df['FamilySize'] = df['SibSp'] + df['Parch']
df.drop(['SibSp', 'Parch'], axis=1, inplace=True)
df.corr()
# + [markdown] id="rEkPictMQyuj"
# **FamilySize** does not correlate strongly with the survival rate.
#
# Let's check whether the person being alone or not affected the survival rate.
# + colab={"base_uri": "https://localhost:8080/"} id="6OCKR_mKQyXT" outputId="94fe9be1-43ea-4a11-a42c-62bf061e9fcc"
df['Alone'] = [0 if df['FamilySize'][i]>0 else 1 for i in df.index]
df.head()
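The list comprehension above works, but a vectorized comparison is the more idiomatic pandas way to build the `Alone` flag (sketched on a toy frame):

```python
import pandas as pd

df = pd.DataFrame({'FamilySize': [0, 2, 1, 0]})

# 1 when the passenger travelled alone, 0 otherwise
df['Alone'] = (df['FamilySize'] == 0).astype(int)

print(df['Alone'].tolist())  # [1, 0, 0, 1]
```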
# + colab={"base_uri": "https://localhost:8080/"} id="gFh5VzYmR0BA" outputId="169b7b88-4b7a-4c7f-efa4-51d8cb4a9a10"
df.groupby(['Alone'])['Survived'].mean()
# + [markdown] id="dn81-TlFSDVN"
# If the person was alone, he/she had a lower chance of surviving.
# + colab={"base_uri": "https://localhost:8080/", "height": 106} id="xrsQyWW_SMh4" outputId="b4c60299-0912-4bb7-d146-c836ac73d8be"
df[['Alone','Fare']].corr()
# + [markdown] id="oCMDfQaQSb6y"
# So we can see that passengers who were not alone tended to pay higher fares.
# + colab={"base_uri": "https://localhost:8080/"} id="QINUNJS3Ssh6" outputId="a4582384-559b-4b59-87ed-8e96c0525b37"
df['Sex'] = [0 if df['Sex'][i]=='male' else 1 for i in df.index]
df.groupby(['Sex'])['Survived'].mean()
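An alternative to the comprehension above is `Series.map`, which makes the encoding explicit and yields `NaN` for unexpected values instead of silently mislabeling them (toy frame shown):

```python
import pandas as pd

df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male']})
df['Sex'] = df['Sex'].map({'male': 0, 'female': 1})

print(df['Sex'].tolist())  # [0, 1, 1, 0]
```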
# + [markdown] id="OQIAAAYqULBn"
# It shows that female passengers had a higher chance of surviving than male ones.
#
# It suggests women were prioritized over men.
# + colab={"base_uri": "https://localhost:8080/"} id="AnWGxyvmUJYA" outputId="735f463e-4246-4795-e94d-53d717dbd99a"
df.groupby(['Embarked'])['Survived'].mean()
# + [markdown] id="nSnmFSlJUs2c"
# # **CONCLUSION**
# + [markdown] id="RDVXO2VsU7pN"
# Female passengers were prioritized over men.
| TITANIC_SURVIVOR_ANALYSIS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from __future__ import division, print_function
# %matplotlib inline
# # scikit-image advanced panorama tutorial
#
# Enhanced from the original demo as featured in [the scikit-image paper](https://peerj.com/articles/453/).
#
# Multiple overlapping images of the same scene, combined into a single image, can yield amazing results. This tutorial will illustrate how to accomplish panorama stitching using scikit-image, from loading the images to cleverly stitching them together.
# ### First things first
#
# Import NumPy and matplotlib, then define a utility function to compare multiple images
# +
import numpy as np
import matplotlib.pyplot as plt
def compare(*images, **kwargs):
"""
Utility function to display images side by side.
Parameters
----------
image0, image1, image2, ... : ndarray
Images to display.
labels : list
Labels for the different images.
"""
# Pop 'labels' before forwarding kwargs to plt.subplots,
# which would otherwise reject the unknown keyword.
labels = kwargs.pop('labels', None)
if labels is None:
    labels = [''] * len(images)
f, axes = plt.subplots(1, len(images), **kwargs)
axes = np.array(axes, ndmin=1)
for n, (image, label) in enumerate(zip(images, labels)):
axes[n].imshow(image, interpolation='nearest', cmap='gray')
axes[n].set_title(label)
axes[n].axis('off')
f.tight_layout()
# -
# ### Load data
#
# The ``ImageCollection`` class provides an easy and efficient way to load and represent multiple images. Images in the ``ImageCollection`` are not read from disk until accessed.
#
# Load a series of images into an ``ImageCollection`` with a wildcard, as they share similar names.
# +
import skimage.io as io
pano_imgs = io.ImageCollection('../../images/pano/JDW_03*')
# -
# Inspect these images using the convenience function `compare()` defined earlier
compare(*pano_imgs, figsize=(12, 10))
# Credit: Images of Private Arch and the trail to Delicate Arch in Arches National Park, USA, taken by <NAME>.<br>
# License: CC-BY 4.0
#
# ---
# # 0. Pre-processing
#
# This stage usually involves one or more of the following:
# * Resizing, often downscaling with fixed aspect ratio
# * Conversion to grayscale, as many feature descriptors are not defined for color images
# * Cropping to region(s) of interest
#
# For convenience our example data is already resized smaller, and we won't bother cropping. However, the images are presently in color, so conversion to grayscale with `skimage.color.rgb2gray` is appropriate.
# +
from skimage.color import rgb2gray
pano0, pano1, pano2 = [rgb2gray(im) for im in pano_imgs]
# -
# View the results
compare(pano0, pano1, pano2, figsize=(12, 10))
# # 1. Feature detection and matching
#
# We need to estimate a projective transformation that relates these images together. The steps will be
#
# 1. Define one image as a _target_ or _destination_ image, which will remain anchored while the others are warped
# 2. Detect features in all three images
# 3. Match features from left and right images against the features in the center, anchored image.
#
# In this three-shot series, the middle image `pano1` is the logical anchor point.
#
# We detect "Oriented FAST and rotated BRIEF" (ORB) features in all three images.
#
# **Note:** For efficiency, in this tutorial we're finding 800 keypoints. The results are good but small variations are expected. If you need a more robust estimate in practice, run multiple times and pick the best result _or_ generate additional keypoints.
# +
from skimage.feature import ORB
# Initialize ORB
# 800 keypoints is large enough for robust results,
# but low enough to run within a few seconds.
orb = ORB(n_keypoints=800, fast_threshold=0.05)
# Detect keypoints in pano0
orb.detect_and_extract(pano0)
keypoints0 = orb.keypoints
descriptors0 = orb.descriptors
# Detect keypoints in pano1
orb.detect_and_extract(pano1)
keypoints1 = orb.keypoints
descriptors1 = orb.descriptors
# Detect keypoints in pano2
orb.detect_and_extract(pano2)
keypoints2 = orb.keypoints
descriptors2 = orb.descriptors
# -
# Match features from images 0 <-> 1 and 1 <-> 2.
# +
from skimage.feature import match_descriptors
# Match descriptors between left/right images and the center
matches01 = match_descriptors(descriptors0, descriptors1, cross_check=True)
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
# -
# Inspect these matched features side-by-side using the convenience function ``skimage.feature.plot_matches``.
# +
from skimage.feature import plot_matches
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano0 -> pano1
plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01)
ax.axis('off');
# -
# Most of these line up similarly, but it isn't perfect. There are a number of obvious outliers or false matches.
# +
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano2 -> pano1
plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12)
ax.axis('off');
# -
# Similar to above, decent signal but numerous false matches.
#
# ---
# # 2. Transform estimation
#
# To filter out the false matches, we apply RANdom SAmple Consensus (RANSAC), a powerful method of rejecting outliers available in ``skimage.transform.ransac``. The transformation is estimated iteratively, based on randomly chosen subsets, finally selecting the model which corresponds best with the majority of matches.
#
# We need to do this twice, once each for the transforms left -> center and right -> center.
# +
from skimage.transform import ProjectiveTransform
from skimage.measure import ransac
# Select keypoints from
# * source (image to be registered): pano0
# * target (reference image): pano1, our middle frame registration target
src = keypoints0[matches01[:, 0]][:, ::-1]
dst = keypoints1[matches01[:, 1]][:, ::-1]
model_robust01, inliers01 = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=1, max_trials=300)
# Select keypoints from
# * source (image to be registered): pano2
# * target (reference image): pano1, our middle frame registration target
src = keypoints2[matches12[:, 1]][:, ::-1]
dst = keypoints1[matches12[:, 0]][:, ::-1]
model_robust12, inliers12 = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=1, max_trials=300)
# -
# The `inliers` returned from RANSAC select the best subset of matches. How do they look?
# +
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano0 -> pano1
plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01[inliers01])
ax.axis('off');
# +
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano2 -> pano1
plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12[inliers12])
ax.axis('off');
# -
# Most of the false matches are rejected!
#
# ---
# # 3. Warping
#
# Next, we produce the panorama itself. We must _warp_, or transform, two of the three images so they will properly align with the stationary image.
#
# ### Extent of output image
# The first step is to find the shape of the output image to contain all three transformed images. To do this we consider the extents of all warped images.
# +
from skimage.transform import SimilarityTransform
# Shape of middle image, our registration target
r, c = pano1.shape[:2]
# Note that transformations take coordinates in (x, y) format,
# not (row, column), in order to be consistent with most literature
corners = np.array([[0, 0],
[0, r],
[c, 0],
[c, r]])
# Warp the image corners to their new positions
warped_corners01 = model_robust01(corners)
warped_corners12 = model_robust12(corners)
# Find the extents of both the reference image and the warped
# target image
all_corners = np.vstack((warped_corners01, warped_corners12, corners))
# The overall output shape will be max - min
corner_min = np.min(all_corners, axis=0)
corner_max = np.max(all_corners, axis=0)
output_shape = (corner_max - corner_min)
# Ensure integer shape with np.ceil and dtype conversion
output_shape = np.ceil(output_shape[::-1]).astype(int)
# -
# ### Apply estimated transforms
#
# Warp the images with `skimage.transform.warp` according to the estimated models. A shift, or _translation_, is needed to place our middle image in the middle - it isn't truly stationary.
#
# Values outside the input images are initially set to -1 to distinguish the "background", which is identified for later use.
#
# **Note:** ``warp`` takes the _inverse_ mapping as an input.
# +
from skimage.transform import warp
# This in-plane offset is the only necessary transformation for the middle image
offset1 = SimilarityTransform(translation= -corner_min)
# Translate pano1 into place
pano1_warped = warp(pano1, offset1.inverse, order=3,
output_shape=output_shape, cval=-1)
# Acquire the image mask for later use
pano1_mask = (pano1_warped != -1) # Mask == 1 inside image
pano1_warped[~pano1_mask] = 0 # Return background values to 0
# -
# Warp left panel into place
# +
# Warp pano0 (left) to pano1
transform01 = (model_robust01 + offset1).inverse
pano0_warped = warp(pano0, transform01, order=3,
output_shape=output_shape, cval=-1)
pano0_mask = (pano0_warped != -1) # Mask == 1 inside image
pano0_warped[~pano0_mask] = 0 # Return background values to 0
# -
# Warp right panel into place
# +
# Warp pano2 (right) to pano1
transform12 = (model_robust12 + offset1).inverse
pano2_warped = warp(pano2, transform12, order=3,
output_shape=output_shape, cval=-1)
pano2_mask = (pano2_warped != -1) # Mask == 1 inside image
pano2_warped[~pano2_mask] = 0 # Return background values to 0
# -
# Inspect the warped images:
compare(pano0_warped, pano1_warped, pano2_warped, figsize=(12, 10));
# ---
# # 4. Combining images the easy (and bad) way
#
# This method simply
#
# 1. sums the warped images
# 2. tracks how many images overlapped to create each point
# 3. normalizes the result.
# Add the three images together. This could create dtype overflows!
# We know they are floating point images after warping, so it's OK.
merged = (pano0_warped + pano1_warped + pano2_warped)
# Track the overlap by adding the masks together
overlap = (pano0_mask * 1.0 + # Multiply by 1.0 for bool -> float conversion
pano1_mask +
pano2_mask)
# Normalize through division by `overlap` - but ensure the minimum is 1
normalized = merged / np.maximum(overlap, 1)
# Finally, view the results!
# +
fig, ax = plt.subplots(figsize=(12, 12))
ax.imshow(normalized, cmap='gray')
plt.tight_layout()
ax.axis('off');
# -
#
# ---
#
# <div style="height: 400px;"></div>
# **What happened?!** Why are there nasty dark lines at boundaries, and why does the middle look so blurry?
#
#
# The **lines are artifacts (boundary effect) from the warping method**. When the image is warped with interpolation, edge pixels containing part image and part background combine these values. We would have bright lines if we'd chosen `cval=2` in the `warp` calls (try it!), but regardless of choice there will always be discontinuities.
#
# ...Unless you use `order=0` in `warp`, which is nearest neighbor. Then edges are perfect (try it!). But who wants to be limited to an inferior interpolation method?
#
# Even then, it's blurry! Is there a better way?
#
# ---
# # 5. Stitching images along a minimum-cost path
#
# Let's step back a moment and consider: Is it even reasonable to blend pixels?
#
# Take a look at a difference image, which is just one image subtracted from the other.
# +
fig, ax = plt.subplots(figsize=(15,12))
# Generate difference image and inspect it
difference_image = pano0_warped - pano1_warped
ax.imshow(difference_image, cmap='gray')
ax.axis('off');
# -
# The surrounding flat gray is zero. _A perfect overlap would show no structure!_
#
# Instead, the overlap region matches fairly well in the middle... but off to the sides, where things start to look a little embossed, a simple average blurs the result. This caused the blurring in the previous method (look again). _Unfortunately, this is almost always the case for panoramas!_
#
# How can we fix this?
#
# Let's attempt to find a vertical path through this difference image which stays as close to zero as possible. If we use that to build a mask, defining a transition between images, the result should appear _seamless_.
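A tiny NumPy sketch of why blending misaligned content blurs: two copies of the same step edge, offset by one pixel, average to a half-step that belongs to neither image.

```python
import numpy as np

# The same step edge in two images, misaligned by one pixel
a = np.array([0., 0., 1., 1., 1.])
b = np.array([0., 0., 0., 1., 1.])

print(a - b)        # difference is nonzero only where they disagree
print((a + b) / 2)  # averaging produces a blurred half-step of 0.5
```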
#
# ---
# # Seamless image stitching with Minimum-Cost Paths and `skimage.graph`
#
# Among other things, `skimage.graph` allows you to
# * start at any point on an array
# * find the path to any other point in the array
# * the path found _minimizes_ the sum of values on the path.
#
#
# The array is called a _cost array_, while the path found is a _minimum-cost path_ or **MCP**.
#
# To accomplish this we need
#
# * Starting and ending points for the path
# * A cost array (a modified difference image)
#
# This method is so powerful that, with a carefully constructed cost array, the seed points are essentially irrelevant. It just works!
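Before using the real thing, here is a minimal Dijkstra-style sketch of the minimum-cost-path idea on a tiny grid (pure Python, 4-connected; `min_cost_path` is a hypothetical helper - the actual `skimage.graph.route_through_array` is more general and far faster):

```python
import heapq

def min_cost_path(costs, start, end):
    """Dijkstra over a 2-D grid; entering a cell costs that cell's value."""
    rows, cols = len(costs), len(costs[0])
    dist = {start: costs[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + costs[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk predecessors back from the end point
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[end]

# A zero-cost "seam" down column 1 of an otherwise expensive grid
grid = [[9, 0, 9],
        [9, 0, 9],
        [9, 0, 9]]
path, cost = min_cost_path(grid, (0, 1), (2, 1))
print(path)  # [(0, 1), (1, 1), (2, 1)]
print(cost)  # 0
```

The path hugs the zero-cost column, just as the seam below will hug the near-zero valley of the difference image.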
# ### Define seed points
# +
ymax = output_shape[1] - 1
xmax = output_shape[0] - 1
# Start anywhere along the top and bottom, left of center.
mask_pts01 = [[0, ymax // 3],
[xmax, ymax // 3]]
# Start anywhere along the top and bottom, right of center.
mask_pts12 = [[0, 2*ymax // 3],
[xmax, 2*ymax // 3]]
# -
# ### Construct cost array
#
# This utility function exists to give a "cost break" for paths from the edge to the overlap region.
#
# We will visually explore the results shortly. Examine the code later - for now, just use it.
# +
from skimage.measure import label
def generate_costs(diff_image, mask, vertical=True, gradient_cutoff=2.):
"""
Ensures equal-cost paths from edges to region of interest.
Parameters
----------
diff_image : ndarray of floats
Difference of two overlapping images.
mask : ndarray of bools
Mask representing the region of interest in ``diff_image``.
vertical : bool
Control operation orientation.
gradient_cutoff : float
Controls how far out of parallel lines can be to edges before
correction is terminated. The default (2.) is good for most cases.
Returns
-------
costs_arr : ndarray of floats
Adjusted costs array, ready for use.
"""
if vertical is not True:
    # Transpose, solve the vertical case, and transpose the result back.
    return generate_costs(diff_image.T, mask.T, vertical=True,
                          gradient_cutoff=gradient_cutoff).T
# Start with a high-cost array of 1's
costs_arr = np.ones_like(diff_image)
# Obtain extent of overlap
row, col = mask.nonzero()
cmin = col.min()
cmax = col.max()
# Label discrete regions
cslice = slice(cmin, cmax + 1)
labels = label(mask[:, cslice])
# Find distance from edge to region
upper = (labels == 0).sum(axis=0)
lower = (labels == 2).sum(axis=0)
# Reject areas of high change
ugood = np.abs(np.gradient(upper)) < gradient_cutoff
lgood = np.abs(np.gradient(lower)) < gradient_cutoff
# Give areas slightly farther from edge a cost break
costs_upper = np.ones_like(upper, dtype=np.float64)
costs_lower = np.ones_like(lower, dtype=np.float64)
costs_upper[ugood] = upper.min() / np.maximum(upper[ugood], 1)
costs_lower[lgood] = lower.min() / np.maximum(lower[lgood], 1)
# Expand from 1d back to 2d
vdist = mask.shape[0]
costs_upper = costs_upper[np.newaxis, :].repeat(vdist, axis=0)
costs_lower = costs_lower[np.newaxis, :].repeat(vdist, axis=0)
# Place these in output array
costs_arr[:, cslice] = costs_upper * (labels == 0)
costs_arr[:, cslice] += costs_lower * (labels == 2)
# Finally, place the difference image
costs_arr[mask] = diff_image[mask]
return costs_arr
# -
# Use this function to generate the cost array.
# Start with the absolute value of the difference image.
# np.abs is necessary because we don't want negative costs!
costs01 = generate_costs(np.abs(pano0_warped - pano1_warped),
pano0_mask & pano1_mask)
# Allow the path to "slide" along top and bottom edges to the optimal horizontal position by setting top and bottom edges to zero cost.
costs01[0, :] = 0
costs01[-1, :] = 0
# Our cost array now looks like this
# +
fig, ax = plt.subplots(figsize=(15, 12))
ax.imshow(costs01, cmap='gray', interpolation='none')
ax.axis('off');
# -
# The tweak we made with `generate_costs` is subtle but important. Can you see it?
# ### Find the minimum-cost path (MCP)
#
# Use `skimage.graph.route_through_array` to find an optimal path through the cost array
# +
from skimage.graph import route_through_array
# Arguments are:
# cost array
# start pt
# end pt
# can it traverse diagonally
pts, _ = route_through_array(costs01, mask_pts01[0], mask_pts01[1], fully_connected=True)
# Convert list of lists to 2d coordinate array for easier indexing
pts = np.array(pts)
# -
# Did it work?
# +
fig, ax = plt.subplots(figsize=(12, 12))
# Plot the difference image
ax.imshow(pano0_warped - pano1_warped, cmap='gray')
# Overlay the minimum-cost path
ax.plot(pts[:, 1], pts[:, 0])
plt.tight_layout()
ax.axis('off');
# -
# That looks like a great seam to stitch these images together - the path looks very close to zero.
# ### Irregularities
#
# Due to the random element in the RANSAC transform estimation, everyone will have a slightly different path. **Your path will look different from mine, and different from your neighbor's.** That's expected! _The awesome thing about MCP is that everyone just calculated the best possible path to stitch together their unique transforms!_
# ### Filling the mask
#
# Turn that path into a mask, which will be 1 where we want the left image to show through and zero elsewhere. We need to fill the left side of the mask with ones over to our path.
#
# **Note**: This is the inverse of NumPy masked array conventions (``numpy.ma``), which specify a negative mask (mask == bad/missing) rather than a positive mask as used here (mask == good/selected).
#
# Place the path into a new, empty array.
# Start with an array of zeros and place the path
mask0 = np.zeros_like(pano0_warped, dtype=np.uint8)
mask0[pts[:, 0], pts[:, 1]] = 1
# Ensure the path appears as expected
# +
fig, ax = plt.subplots(figsize=(11, 11))
# View the path in black and white
ax.imshow(mask0, cmap='gray')
ax.axis('off');
# -
# Label the various contiguous regions in the image using `skimage.measure.label`
# +
from skimage.measure import label
# Labeling starts with one at point (0, 0)
mask0 = (label(mask0, connectivity=1, background=-1) == 1)
# The result
plt.imshow(mask0, cmap='gray');
# -
# Looks great!
#
# ### Rinse and repeat
#
# Apply the same principles to images 1 and 2: first, build the cost array
# +
# Start with the absolute value of the difference image.
# np.abs necessary because we don't want negative costs!
costs12 = generate_costs(np.abs(pano1_warped - pano2_warped),
pano1_mask & pano2_mask)
# Allow the path to "slide" along top and bottom edges to the optimal
# horizontal position by setting top and bottom edges to zero cost
costs12[0, :] = 0
costs12[-1, :] = 0
# -
# **Add an additional constraint this time**, to prevent this path crossing the prior one!
costs12[mask0 > 0] = 1
# Check the result
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(costs12, cmap='gray');
# Your results may look slightly different.
#
# Compute the minimal cost path
# +
# Arguments are:
# cost array
# start pt
# end pt
# can it traverse diagonally
pts, _ = route_through_array(costs12, mask_pts12[0], mask_pts12[1], fully_connected=True)
# Convert list of lists to 2d coordinate array for easier indexing
pts = np.array(pts)
# -
# Verify a reasonable result
# +
fig, ax = plt.subplots(figsize=(12, 12))
# Plot the difference image
ax.imshow(pano1_warped - pano2_warped, cmap='gray')
# Overlay the minimum-cost path
ax.plot(pts[:, 1], pts[:, 0]);
ax.axis('off');
# -
# Initialize the mask by placing the path in a new array
mask2 = np.zeros_like(pano0_warped, dtype=np.uint8)
mask2[pts[:, 0], pts[:, 1]] = 1
# Fill the right side this time, again using `skimage.measure.label` - the label of interest is 3
# +
mask2 = (label(mask2, connectivity=1, background=-1) == 3)
# The result
plt.imshow(mask2, cmap='gray');
# -
# ### Final mask
#
# The last mask for the middle image is one of exclusion - it will be displayed everywhere `mask0` and `mask2` are not.
mask1 = ~(mask0 | mask2).astype(bool)
# Define a convenience function to place masks in alpha channels
def add_alpha(img, mask=None):
"""
Adds a masked alpha channel to an image.
Parameters
----------
img : (M, N[, 3]) ndarray
Image data, should be rank-2 or rank-3 with RGB channels
mask : (M, N[, 3]) ndarray, optional
Mask to be applied. If None, the alpha channel is added
with full opacity assumed (1) at all locations.
"""
from skimage.color import gray2rgb
if mask is None:
mask = np.ones_like(img)
if img.ndim == 2:
img = gray2rgb(img)
return np.dstack((img, mask))
# Obtain final, alpha blended individual images and inspect them
# +
pano0_final = add_alpha(pano0_warped, mask0)
pano1_final = add_alpha(pano1_warped, mask1)
pano2_final = add_alpha(pano2_warped, mask2)
compare(pano0_final, pano1_final, pano2_final, figsize=(12, 12))
# -
# What we have here is the world's most complicated and precisely-fitting jigsaw puzzle...
#
# Plot all three together and view the results!
# +
fig, ax = plt.subplots(figsize=(12, 12))
# This is a perfect combination, but matplotlib's interpolation
# makes it appear to have gaps. So we turn it off.
ax.imshow(pano0_final, interpolation='none')
ax.imshow(pano1_final, interpolation='none')
ax.imshow(pano2_final, interpolation='none')
fig.tight_layout()
ax.axis('off');
# -
# Fantastic! Without the black borders, you'd never know this was composed of separate images!
#
# ---
# # Bonus round: now, in color!
#
# We converted to grayscale for ORB feature detection, back in the initial **preprocessing** steps. Since we stored our transforms and masks, adding color is straightforward!
#
# Transform the colored images
# +
# Identical transforms as before, except
# * Operating on original color images
# * filling with cval=0 as we know the masks
pano0_color = warp(pano_imgs[0], (model_robust01 + offset1).inverse, order=3,
output_shape=output_shape, cval=0)
pano1_color = warp(pano_imgs[1], offset1.inverse, order=3,
output_shape=output_shape, cval=0)
pano2_color = warp(pano_imgs[2], (model_robust12 + offset1).inverse, order=3,
output_shape=output_shape, cval=0)
# -
# Then apply the custom alpha channel masks
pano0_final = add_alpha(pano0_color, mask0)
pano1_final = add_alpha(pano1_color, mask1)
pano2_final = add_alpha(pano2_color, mask2)
# View the result!
# +
fig, ax = plt.subplots(figsize=(12, 12))
# Turn off matplotlib's interpolation
ax.imshow(pano0_final, interpolation='none')
ax.imshow(pano1_final, interpolation='none')
ax.imshow(pano2_final, interpolation='none')
fig.tight_layout()
ax.axis('off');
# -
# Save the combined, color panorama locally as `'./pano-advanced-output.png'`
# +
from skimage.color import gray2rgb
# Start with empty image
pano_combined = np.zeros_like(pano0_color)
# Place the masked portion of each image into the array
# masks are 2d, they need to be (M, N, 3) to match the color images
pano_combined += pano0_color * gray2rgb(mask0)
pano_combined += pano1_color * gray2rgb(mask1)
pano_combined += pano2_color * gray2rgb(mask2)
# Save the output - precision loss warning is expected
# moving from floating point -> uint8
io.imsave('./pano-advanced-output.png', pano_combined)
# -
#
# ---
#
# <div style="height: 400px;"></div>
# <div style="height: 400px;"></div>
# # Once more, from the top
#
# I hear what you're saying. "Those were too easy! The panoramas had too much overlap! Does this still work in the real world?"
# **Go back to the top. Under "Load data", replace the string `'../../images/pano/JDW_03*'` with `'../../images/pano/JDW_9*'`, and re-run all of the cells in order.**
#
# ---
#
# <div style="height: 400px;"></div>
# %reload_ext load_style
# %load_style ../../themes/tutorial.css
| lectures/solutions/adv3_panorama-stitching-solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://www.kaggle.com/alexteboul/tutorial-part-2-model-building-using-the-metadata?scriptVersionId=89713829" target="_blank"><img align="left" alt="Kaggle" title="Open in Kaggle" src="https://kaggle.com/static/images/open-in-kaggle.svg"></a>
# + [markdown] _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.043931, "end_time": "2022-03-10T02:29:09.334453", "exception": false, "start_time": "2022-03-10T02:29:09.290522", "status": "completed"} tags=[]
# # Tutorial Part 2: Model Building using the Metadata
#
# Hello and welcome to Part 2 of this Pawpularity Contest Tutorial Series. In [Tutorial Part 1: EDA for Beginners ](https://www.kaggle.com/alexteboul/tutorial-part-1-eda-for-beginners/notebook), we covered the exploratory data analysis process from start to finish for the PetFinder.my Pawpularity Contest. If you'd like to learn more about how to explore and visualize both the metadata and image data for this competition, check out that notebook.
#
# **To recap some important findings from that exploration:**
# * The feature variables are all binary, 0s or 1s, while the target variable (Pawpularity) is 1-100.
# * The feature variables appear to have similar distributions of pawpularity scores, between themselves and their respective classes (0s vs 1s).
# * The images are hard to differentiate in terms of Pawpularity score just by looking at them.
#
# In this notebook, I'm going to cover how to build some simple models using the metadata provided in the competition. By metadata, I'm referring to the .csv training data in this contest. According to the competition hosts, this data was created manually by people looking at the pet pictures and has features like Eyes, Group, Blur, Face, etc.
#
# **In this notebook you'll learn about:**
# * Decision Tree Classification
# * Decision Tree Regression
# * Ordinary Least Squares Regression
# * Ridge Regression
# * Bernoulli Naive Bayes Classification
# * Random Forest Regression
# * Histogram-based Gradient Boosting Regression (LightGBM)
# * How to evaluate your models
# * Plan for next steps
#
# **TLDR:**
# * All the models kind of suck, and it's the metadata-Pawpularity relationship that is bad. It's unlikely that any amount of parameter tuning or different models will make these much better. The best bet is to use the images themselves, which I will demonstrate in Tutorial Part 3.
#
# | Model | RMSE |
# |--------------------------------------------------|--------|
# | 4.1 Decision Tree Regression | 20.857 |
# | 4.2 Decision Tree Classification | 22.900 |
# | 5.1 Ordinary Least Squares Regression | 20.827 |
# | 5.2 Ridge Regression | 20.827 |
# | 6.1 Bernoulli Naive Bayes Classification | 23.468 |
# | 7.1 Random Forest Regression | 20.838 |
# | 7.2 Histogram-based Gradient Boosting Regression | 20.924 |
#
# **Index:**
# 1. Load in your packages
# 2. Is this a Classification or Regression Problem?
# 3. Get the Data
# 4. Decision Trees
# - 4.1 Decision Tree Regressor
# - 4.2 Decision Tree Classifier
# 5. Linear Models
# - 5.1 Ordinary Least Squares Regression
# - 5.2 Ridge Regression
# 6. Naive Bayes
# - 6.1 Bernoulli Naive Bayes
# 7. Ensemble Methods
# - 7.1 Random Forest Regression
# - 7.2 Histogram-based Gradient Boosting Regression
# 8. Conclusion
#
#
# **Tutorials so far:**
#
# 1. **In [Tutorial Part 1: EDA for Beginners](https://www.kaggle.com/alexteboul/tutorial-part-1-eda-for-beginners)**, we covered the exploratory data analysis process from start to finish for the PetFinder.my Pawpularity Contest.
# 2. **In [Tutorial Part 2: Model Building using the Metadata](https://www.kaggle.com/alexteboul/tutorial-part-2-model-building-using-the-metadata)**, we built models using the metadata (.csv data) provided by the competition hosts. Specifically, we tried Decision Tree Classification, Decision Tree Regression, Ordinary Least Squares Regression, Ridge Regression, Bernoulli Naive Bayes Classification, Random Forest Regression, and Histogram-based Gradient Boosting Regression (LightGBM). RMSE 20.54
# 3. **In [Tutorial Part 3: CNN Image Models 1](https://www.kaggle.com/alexteboul/tutorial-part-3-cnn-image-modeling-1)**, we explored preprocessing the training images, explaining the data types necessary to model with images, a basic Convolutional Neural Network architecture, and submitting predictions.
# + [markdown] papermill={"duration": 0.050511, "end_time": "2022-03-10T02:29:09.432445", "exception": false, "start_time": "2022-03-10T02:29:09.381934", "status": "completed"} tags=[]
# -------
# + [markdown] papermill={"duration": 0.042947, "end_time": "2022-03-10T02:29:09.518416", "exception": false, "start_time": "2022-03-10T02:29:09.475469", "status": "completed"} tags=[]
# ## 1: Load in your packages
# + papermill={"duration": 1.850483, "end_time": "2022-03-10T02:29:11.416006", "exception": false, "start_time": "2022-03-10T02:29:09.565523", "status": "completed"} tags=[]
#load in packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
import time
import math
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
#Models
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.naive_bayes import BernoulliNB
from sklearn.ensemble import RandomForestRegressor
from sklearn.experimental import enable_hist_gradient_boosting  #only needed on scikit-learn < 0.24, where HistGradientBoosting was still experimental
from sklearn.ensemble import HistGradientBoostingRegressor
#use sklearn.metrics.mean_squared_error() AND math.sqrt() to get RMSE
from sklearn.metrics import mean_squared_error
#set a random_state to be used in the notebook
random_state = 7
# + [markdown] papermill={"duration": 0.045115, "end_time": "2022-03-10T02:29:11.505373", "exception": false, "start_time": "2022-03-10T02:29:11.460258", "status": "completed"} tags=[]
# ------
# + [markdown] papermill={"duration": 0.041534, "end_time": "2022-03-10T02:29:11.591776", "exception": false, "start_time": "2022-03-10T02:29:11.550242", "status": "completed"} tags=[]
# ## 2: Is this a Classification or Regression Problem?
# One of the first steps in the model building process is to determine what type of machine learning (ML) problem you are trying to solve. Because the metadata .csv files give us example data with both feature values and Pawpularity scores, we know that we can at least treat this as a supervised learning problem. We have labels --> okay cool, it's supervised learning.
#
# Next, we need to decide whether this is a classification problem (predicting a categorical variable) or a regression problem (predicting a continuous variable). As it turns out, we can do either in this case: we can treat the Pawpularity score as a continuous variable for our regression models to predict, or as a categorical variable with classes 1-100 for our classification models to predict.
#
# When our predictions are judged, however, the Root Mean Square Error (RMSE) metric will be used to evaluate model success. RMSE is the square root of the mean of the squared residuals (your prediction errors). More plainly, residuals measure how far off your predictions are from the actual values, and RMSE summarizes how spread out these errors are. Basically, the smaller the RMSE you get, the better. Theoretically, if your model perfectly predicted everything, RMSE would be 0. RMSE is typically associated with regression evaluation.
#
# $$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat{y}_i\big)^2}$$
#
# where $y_i$ is the actual value and $\hat{y}_i$ is the prediction.
# + papermill={"duration": 0.057417, "end_time": "2022-03-10T02:29:11.694786", "exception": false, "start_time": "2022-03-10T02:29:11.637369", "status": "completed"} tags=[]
#Example of RMSE
example_y_actual = [10,20,30,40,50,60,70,80,90,100]
example_y_predicted = [12,20,35,45,50,30,50,80,92,95]
example_MSE = mean_squared_error(example_y_actual, example_y_predicted)
example_RMSE = math.sqrt(example_MSE)
print("Example Mean Squared Error:", round(example_MSE,1))
print("Example Root Mean Square Error:", round(example_RMSE,1))
# + [markdown] papermill={"duration": 0.043217, "end_time": "2022-03-10T02:29:11.782943", "exception": false, "start_time": "2022-03-10T02:29:11.739726", "status": "completed"} tags=[]
# -------
# + [markdown] papermill={"duration": 0.04888, "end_time": "2022-03-10T02:29:11.882355", "exception": false, "start_time": "2022-03-10T02:29:11.833475", "status": "completed"} tags=[]
# ## 3: Get the data
# First, load the data from train.csv so we can start training some models. We're going to train and test multiple models to get a sense of what works, so it helps to do the train/test split up front and reuse the same splits for each model.
#
# Let's start with an 80-20 train-test split. So 80% of the train.csv will be used to train our models and 20% will be used to test how well the models work with new data.
# + papermill={"duration": 0.123406, "end_time": "2022-03-10T02:29:12.04804", "exception": false, "start_time": "2022-03-10T02:29:11.924634", "status": "completed"} tags=[]
#load in the data:
#source path (where the Pawpularity contest data resides)
path = '../input/petfinder-pawpularity-score/'
#Get the metadata (the .csv data) and put it into DataFrames
train_df = pd.read_csv(path + 'train.csv')
#show the dimensions of the train metadata.
print('train_df dimensions: ', train_df.shape)
train_df.head(3)
# + [markdown] papermill={"duration": 0.042193, "end_time": "2022-03-10T02:29:12.137143", "exception": false, "start_time": "2022-03-10T02:29:12.09495", "status": "completed"} tags=[]
# We have our data in a dataframe, so let's store our feature variables in X and target variable in y. Note that Id has to be removed from the feature variables. We just want the columns our models can use.
# + papermill={"duration": 0.063903, "end_time": "2022-03-10T02:29:12.243786", "exception": false, "start_time": "2022-03-10T02:29:12.179883", "status": "completed"} tags=[]
#select Pawpularity as target variable:
y = train_df['Pawpularity']
#select all the other columns minus Id and Pawpularity as the feature variables:
X = train_df.drop(['Id','Pawpularity'],axis=1)
# + papermill={"duration": 0.061611, "end_time": "2022-03-10T02:29:12.348126", "exception": false, "start_time": "2022-03-10T02:29:12.286515", "status": "completed"} tags=[]
#now make the train-test splits
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=random_state)
print('Dimensions: \n x_train: {} \n x_test: {} \n y_train: {} \n y_test: {}'.format(x_train.shape, x_test.shape, y_train.shape, y_test.shape))
# + papermill={"duration": 0.060567, "end_time": "2022-03-10T02:29:12.454305", "exception": false, "start_time": "2022-03-10T02:29:12.393738", "status": "completed"} tags=[]
#x_train and x_test are dataframes
x_train.head(2)
#x_test.head(2)
# + papermill={"duration": 0.059336, "end_time": "2022-03-10T02:29:12.557776", "exception": false, "start_time": "2022-03-10T02:29:12.49844", "status": "completed"} tags=[]
#y_train and y_test are pandas Series. .iloc[0:2] shows the first two elements of the series - in our case, Pawpularity scores of 26 and 11.
y_train.iloc[0:2]
#y_test.iloc[0]
# + [markdown] papermill={"duration": 0.046688, "end_time": "2022-03-10T02:29:12.650368", "exception": false, "start_time": "2022-03-10T02:29:12.60368", "status": "completed"} tags=[]
# -------
# + [markdown] papermill={"duration": 0.04464, "end_time": "2022-03-10T02:29:12.74082", "exception": false, "start_time": "2022-03-10T02:29:12.69618", "status": "completed"} tags=[]
# ## 4: Decision Trees
# Decision trees are one of the most basic machine learning algorithms and on relatively small datasets they can be a good place to start. One interesting feature of decision trees is that they can be used for both regression and classification supervised learning problems. Check out this great decision trees explanation on KDNuggets: [Link](https://www.kdnuggets.com/2020/01/decision-tree-algorithm-explained.html)
# + [markdown] papermill={"duration": 0.047019, "end_time": "2022-03-10T02:29:12.832605", "exception": false, "start_time": "2022-03-10T02:29:12.785586", "status": "completed"} tags=[]
# ### 4.1 Decision Tree Regressor
# * Play around with the DecisionTreeRegressor parameters: [SciKit Documentation - Decision Tree Regressor](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html)
# + papermill={"duration": 0.067199, "end_time": "2022-03-10T02:29:12.946897", "exception": false, "start_time": "2022-03-10T02:29:12.879698", "status": "completed"} tags=[]
#create the Regressor
tree_reg = DecisionTreeRegressor(max_depth = 3, min_samples_split = 10)
#train the model
start = time.time()
tree_reg.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
tree_reg_pred = tree_reg.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
tree_reg_RMSE = math.sqrt(mean_squared_error(y_test, tree_reg_pred))
print(f'tree_reg_RMSE: {round(tree_reg_RMSE,3)}')
# + [markdown] papermill={"duration": 0.046842, "end_time": "2022-03-10T02:29:13.043389", "exception": false, "start_time": "2022-03-10T02:29:12.996547", "status": "completed"} tags=[]
# Cool, we got an RMSE for our first model! Note that 20.857 is pretty bad, which suggests right off the bat that our initial assumption was on track: these metadata models are unlikely to fit the data well.
# + papermill={"duration": 1.386874, "end_time": "2022-03-10T02:29:14.479633", "exception": false, "start_time": "2022-03-10T02:29:13.092759", "status": "completed"} tags=[]
#visualize the decision tree
fig = plt.figure(figsize=(15,5))
plot = tree.plot_tree(tree_reg, feature_names=x_train.columns.values.tolist(), filled=True)
# + papermill={"duration": 1.405007, "end_time": "2022-03-10T02:29:15.935745", "exception": false, "start_time": "2022-03-10T02:29:14.530738", "status": "completed"} tags=[]
#let's see what our predictions look like vs the actual
def ActualvPredictionsGraph(y_test, y_pred, title):
    plt.figure(figsize=(12,3))
    plt.scatter(range(len(y_test)), y_test, color='blue')
    plt.scatter(range(len(y_pred)), y_pred, color='red')
    plt.xlabel('Index')
    plt.ylabel('Pawpularity')
    plt.title(title, fontdict={'fontsize': 15})
    plt.legend(handles=[mpatches.Patch(color='red', label='prediction'),
                        mpatches.Patch(color='blue', label='actual')])
    plt.show()
    return
#plot it
ActualvPredictionsGraph(y_test[0:50], tree_reg_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, tree_reg_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(tree_reg_pred,color='r',alpha=0.3,stat='probability', kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability', kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + [markdown] papermill={"duration": 0.053752, "end_time": "2022-03-10T02:29:16.042375", "exception": false, "start_time": "2022-03-10T02:29:15.988623", "status": "completed"} tags=[]
# When you're actually submitting to the competition, you'd make the predictions on the test data from the competition. This is different from the test data we created to help with our model evaluation just now.
#
# **How to submit to the competition:**
# 1. Get the data: `test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')`
# 2. Drop the `Id` column from the `test_df` dataframe (it already doesn't have `Pawpularity`, so there's nothing else to remove): `x_test_submission = test_df.drop(['Id'], axis=1)`
# 3. Predict with a model you've trained, in this case `tree_reg`, and add the predictions to the dataframe: `test_df['Pawpularity'] = tree_reg.predict(x_test_submission)`
# 4. Keep just the `Id` and `Pawpularity` columns for the submission: `submission_df = test_df[['Id','Pawpularity']]`
# 5. Save it to a .csv file called submission.csv: `submission_df.to_csv("submission.csv", index=False)`
# + papermill={"duration": 0.085885, "end_time": "2022-03-10T02:29:16.178681", "exception": false, "start_time": "2022-03-10T02:29:16.092796", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv') #gets the data
x_test_submission = test_df.drop(['Id'],axis=1) #drops the Id column from the test_df dataframe (it already doesn't have Pawpularity so no need to remove that)
test_df['Pawpularity'] = tree_reg.predict(x_test_submission) #predict with a model you've trained, in this case tree_reg, and add the predictions to the test_df dataframe
submission_df = test_df[['Id','Pawpularity']] #keep just the Id and Pawpularity score for the submission
submission_df.to_csv("submission.csv", index=False) #save it to a .csv file called submission.csv
submission_df.head()
# + [markdown] papermill={"duration": 0.05049, "end_time": "2022-03-10T02:29:16.279799", "exception": false, "start_time": "2022-03-10T02:29:16.229309", "status": "completed"} tags=[]
# ### 4.2 Decision Tree Classifier
# * Play around with the DecisionTreeClassifier parameters: [SciKit Documentation - Decision Tree Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html)
# + papermill={"duration": 0.081989, "end_time": "2022-03-10T02:29:16.412609", "exception": false, "start_time": "2022-03-10T02:29:16.33062", "status": "completed"} tags=[]
#create the Classifier
tree_clf = DecisionTreeClassifier(max_depth = 3, min_samples_split = 10)
#train the model
start = time.time()
tree_clf.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
tree_clf_pred = tree_clf.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
tree_clf_RMSE = math.sqrt(mean_squared_error(y_test, tree_clf_pred))
print(f'tree_clf_RMSE: {round(tree_clf_RMSE,3)}')
# + [markdown] papermill={"duration": 0.06571, "end_time": "2022-03-10T02:29:16.552665", "exception": false, "start_time": "2022-03-10T02:29:16.486955", "status": "completed"} tags=[]
# Again, a pretty bad model, but it shows that you can use a decision tree for both classification and regression.
# + papermill={"duration": 1.221204, "end_time": "2022-03-10T02:29:17.830496", "exception": false, "start_time": "2022-03-10T02:29:16.609292", "status": "completed"} tags=[]
#plot it
ActualvPredictionsGraph(y_test[0:50], tree_clf_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, tree_clf_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(tree_clf_pred,color='r',alpha=0.3,stat='probability', kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability', kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + [markdown] papermill={"duration": 0.057567, "end_time": "2022-03-10T02:29:17.9445", "exception": false, "start_time": "2022-03-10T02:29:17.886933", "status": "completed"} tags=[]
# **Important:** Note that the classifier is really just predicting values that fall within a narrow range. It's doing a poor job of actually predicting how different pet features impact the Pawpularity score. Instead, the model has learned that guessing near the mean Pawpularity score is a good way to minimize the error. We do not want this!
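# + [markdown]
# To see how close these models are to doing nothing at all, it helps to compare against a trivial baseline that always predicts the training mean. A minimal sketch on made-up scores (not the competition data):

```python
import math
import numpy as np
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
# hypothetical Pawpularity-like scores, NOT the competition data
y_train_fake = rng.integers(1, 101, size=1000)
y_test_fake = rng.integers(1, 101, size=250)

# a "model" that ignores the features and always predicts the training mean
baseline_pred = np.full(len(y_test_fake), y_train_fake.mean())
baseline_rmse = math.sqrt(mean_squared_error(y_test_fake, baseline_pred))
print(f"constant-mean baseline RMSE: {baseline_rmse:.3f}")
```

# If a trained model's RMSE barely beats this kind of baseline, its features carry very little signal.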
# + papermill={"duration": 0.084362, "end_time": "2022-03-10T02:29:18.086813", "exception": false, "start_time": "2022-03-10T02:29:18.002451", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
x_test_submission = test_df.drop(['Id'],axis=1)
test_df['Pawpularity'] = tree_clf.predict(x_test_submission)
submission_df = test_df[['Id','Pawpularity']]
submission_df.to_csv("submission.csv", index=False)
submission_df.head()
# + [markdown] papermill={"duration": 0.05931, "end_time": "2022-03-10T02:29:18.201855", "exception": false, "start_time": "2022-03-10T02:29:18.142545", "status": "completed"} tags=[]
# **Decision Trees Summary:** Decision Trees produced pretty poor models. But it's always good to know what doesn't work as well as what does.
#
# * **tree_reg_RMSE:** 20.857
# * **tree_clf_RMSE:** 22.900
# + [markdown] papermill={"duration": 0.064526, "end_time": "2022-03-10T02:29:18.326955", "exception": false, "start_time": "2022-03-10T02:29:18.262429", "status": "completed"} tags=[]
# --------
# + [markdown] papermill={"duration": 0.064102, "end_time": "2022-03-10T02:29:18.456132", "exception": false, "start_time": "2022-03-10T02:29:18.39203", "status": "completed"} tags=[]
# ## 5: Linear Models
# * Play around with the Linear Models and their parameters: [SciKit Documentation - Linear Models](https://scikit-learn.org/stable/modules/linear_model.html)
# + [markdown] papermill={"duration": 0.059582, "end_time": "2022-03-10T02:29:18.580724", "exception": false, "start_time": "2022-03-10T02:29:18.521142", "status": "completed"} tags=[]
# ### 5.1 Ordinary Least Squares Regression
# The most basic linear model is plain linear regression. Multicollinearity, i.e. features that are overly similar to one another, has a big impact on these models. Feature selection is therefore an important part of using linear models, but we're going to skip it for the sake of simplicity.
# * Tune the OLS linear regression model and find more info: [SciKit Documentation - Linear Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression)
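# + [markdown]
# If you do want to check for collinearity before fitting a linear model, a correlation matrix is a quick first look. A sketch on made-up binary features (the `Face`/`Eyes`/`Blur` names and the 0.7 threshold are illustrative, not taken from the competition data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# made-up binary features; 'Face' and 'Eyes' are deliberately correlated
face = rng.integers(0, 2, size=1000)
eyes = np.where(rng.random(1000) < 0.9, face, 1 - face)
df = pd.DataFrame({'Face': face, 'Eyes': eyes,
                   'Blur': rng.integers(0, 2, size=1000)})

corr = df.corr()
# flag highly correlated pairs: candidates to drop before fitting OLS
high = [(a, b) for a in corr.columns for b in corr.columns
        if a < b and abs(corr.loc[a, b]) > 0.7]
print(high)
```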
# + papermill={"duration": 0.095121, "end_time": "2022-03-10T02:29:18.740542", "exception": false, "start_time": "2022-03-10T02:29:18.645421", "status": "completed"} tags=[]
#create the Ordinary Least Squares Regression model
ols_reg = LinearRegression()
#train the model
start = time.time()
ols_reg.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
ols_reg_pred = ols_reg.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
ols_reg_RMSE = math.sqrt(mean_squared_error(y_test, ols_reg_pred))
print(f'ols_reg_RMSE: {round(ols_reg_RMSE,3)}')
# + papermill={"duration": 1.647478, "end_time": "2022-03-10T02:29:20.502134", "exception": false, "start_time": "2022-03-10T02:29:18.854656", "status": "completed"} tags=[]
#plot it
ActualvPredictionsGraph(y_test[0:50], ols_reg_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, ols_reg_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(ols_reg_pred,color='r',alpha=0.3,stat='probability', kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability', kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + [markdown] papermill={"duration": 0.068149, "end_time": "2022-03-10T02:29:20.632476", "exception": false, "start_time": "2022-03-10T02:29:20.564327", "status": "completed"} tags=[]
# So the ordinary least squares linear regression model is even narrower in its predictions than the decision trees - not great.
# + papermill={"duration": 0.087391, "end_time": "2022-03-10T02:29:20.786511", "exception": false, "start_time": "2022-03-10T02:29:20.69912", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
x_test_submission = test_df.drop(['Id'],axis=1)
test_df['Pawpularity'] = ols_reg.predict(x_test_submission)
submission_df = test_df[['Id','Pawpularity']]
submission_df.to_csv("submission.csv", index=False)
submission_df.head()
# + [markdown] papermill={"duration": 0.068686, "end_time": "2022-03-10T02:29:20.923642", "exception": false, "start_time": "2022-03-10T02:29:20.854956", "status": "completed"} tags=[]
# ### 5.2 Ridge Regression
# Another basic regression model is ridge regression. Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients: the ridge coefficients minimize a penalized residual sum of squares. Basically, the larger the value of alpha, the greater the shrinkage, and thus the more robust the coefficients become to collinearity. Let's see if this improves the model at all.
# * For more on the documentation: [SciKit Learn Documentation - Ridge Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html#sklearn.linear_model.Ridge)
# + papermill={"duration": 0.086659, "end_time": "2022-03-10T02:29:21.075037", "exception": false, "start_time": "2022-03-10T02:29:20.988378", "status": "completed"} tags=[]
#create the Ridge Regression model
ridge_reg = Ridge(alpha=2.0)
#train the model
start = time.time()
ridge_reg.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
ridge_reg_pred = ridge_reg.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
ridge_reg_RMSE = math.sqrt(mean_squared_error(y_test, ridge_reg_pred))
print(f'ridge_reg_RMSE: {round(ridge_reg_RMSE,3)}')
# + papermill={"duration": 1.758581, "end_time": "2022-03-10T02:29:22.946996", "exception": false, "start_time": "2022-03-10T02:29:21.188415", "status": "completed"} tags=[]
#plot it
ActualvPredictionsGraph(y_test[0:50], ridge_reg_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, ridge_reg_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(ridge_reg_pred,color='r',alpha=0.3,stat='probability', kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability',kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + [markdown] papermill={"duration": 0.068183, "end_time": "2022-03-10T02:29:23.081773", "exception": false, "start_time": "2022-03-10T02:29:23.01359", "status": "completed"} tags=[]
# Interesting: the RMSE is exactly the same as the OLS model's. Think about why this is - with thousands of training rows and only a handful of binary features, an alpha of 2.0 shrinks the coefficients by a negligible amount, so the two models make essentially the same predictions.
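# + [markdown]
# A quick sketch of why OLS and a lightly penalized ridge model land on near-identical answers, on made-up binary features rather than the competition data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(7)
# hypothetical stand-in for the metadata: 12 binary features, weak signal
X = rng.integers(0, 2, size=(8000, 12)).astype(float)
y = 40 + X @ rng.normal(0, 1, size=12) + rng.normal(0, 20, size=8000)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=2.0).fit(X, y)

# with thousands of rows, alpha=2.0 barely shrinks the coefficients,
# so the two models make near-identical predictions
max_diff = np.abs(ols.coef_ - ridge.coef_).max()
print(f"largest coefficient difference: {max_diff:.5f}")
```

# A much larger alpha (relative to the number of samples) would be needed before ridge visibly departs from OLS.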
# + papermill={"duration": 0.090748, "end_time": "2022-03-10T02:29:23.239776", "exception": false, "start_time": "2022-03-10T02:29:23.149028", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
x_test_submission = test_df.drop(['Id'],axis=1)
test_df['Pawpularity'] = ridge_reg.predict(x_test_submission)
submission_df = test_df[['Id','Pawpularity']]
submission_df.to_csv("submission.csv", index=False)
submission_df.head()
# + [markdown] papermill={"duration": 0.064578, "end_time": "2022-03-10T02:29:23.370086", "exception": false, "start_time": "2022-03-10T02:29:23.305508", "status": "completed"} tags=[]
# Turns out the Ridge Regression is just as bad as the Ordinary Least Squares Regression model. Let's try something that will better approximate and learn from the distribution of Pawpularity scores.
#
# **Linear Models Summary:**
# * **ols_reg_RMSE:** 20.827
# * **ridge_reg_RMSE:** 20.827
# + [markdown] papermill={"duration": 0.064811, "end_time": "2022-03-10T02:29:23.507877", "exception": false, "start_time": "2022-03-10T02:29:23.443066", "status": "completed"} tags=[]
# ------------
# + [markdown] papermill={"duration": 0.066048, "end_time": "2022-03-10T02:29:23.644156", "exception": false, "start_time": "2022-03-10T02:29:23.578108", "status": "completed"} tags=[]
# ## 6: Naive Bayes
# Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem. They have the “naive” assumption of conditional independence between every pair of features given the value of the target variable. They also tend to be reasonably fast.
# * Learn more about Naive Bayes and play with the parameters: [SciKit Documentation - Naive Bayes Classifier](https://scikit-learn.org/stable/modules/naive_bayes.html)
# + [markdown] papermill={"duration": 0.066462, "end_time": "2022-03-10T02:29:23.776644", "exception": false, "start_time": "2022-03-10T02:29:23.710182", "status": "completed"} tags=[]
# ### 6.1 Bernoulli Naive Bayes
#
# BernoulliNB implements the naive Bayes algorithm for data distributed according to multivariate Bernoulli distributions, meaning each feature is assumed to be a binary-valued (Bernoulli, boolean) variable. That is exactly the structure of the metadata we are given in this contest, so let's see how it works. You'd typically see this model used in Natural Language Processing tasks, but it could be useful in this unique case as well.
# * Learn more here: [SciKit Documentation - BernoulliNB](https://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.BernoulliNB.html#sklearn.naive_bayes.BernoulliNB)
# + papermill={"duration": 0.125765, "end_time": "2022-03-10T02:29:23.97243", "exception": false, "start_time": "2022-03-10T02:29:23.846665", "status": "completed"} tags=[]
#create the Bernoulli Naive Bayes model
BernoulliNB_clf = BernoulliNB()
#train the model
start = time.time()
BernoulliNB_clf.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
BernoulliNB_clf_pred = BernoulliNB_clf.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
BernoulliNB_clf_RMSE = math.sqrt(mean_squared_error(y_test, BernoulliNB_clf_pred))
print(f'BernoulliNB_clf_RMSE: {round(BernoulliNB_clf_RMSE,3)}')
# + papermill={"duration": 1.950366, "end_time": "2022-03-10T02:29:26.04453", "exception": false, "start_time": "2022-03-10T02:29:24.094164", "status": "completed"} tags=[]
#plot it
ActualvPredictionsGraph(y_test[0:50], BernoulliNB_clf_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, BernoulliNB_clf_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(BernoulliNB_clf_pred,color='r',alpha=0.3,stat='probability',kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability', kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + papermill={"duration": 0.095357, "end_time": "2022-03-10T02:29:26.214243", "exception": false, "start_time": "2022-03-10T02:29:26.118886", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
x_test_submission = test_df.drop(['Id'],axis=1)
test_df['Pawpularity'] = BernoulliNB_clf.predict(x_test_submission)
submission_df = test_df[['Id','Pawpularity']]
submission_df.to_csv("submission.csv", index=False)
submission_df.head()
# + [markdown] papermill={"duration": 0.069277, "end_time": "2022-03-10T02:29:26.353353", "exception": false, "start_time": "2022-03-10T02:29:26.284076", "status": "completed"} tags=[]
# So far this is actually the worst RMSE result, but notice how the predictions from this model more closely match the actual distribution of Pawpularity scores. This model is not just guessing values close to the mean to get a good evaluation score; it's predicting Pawpularity values that more closely resemble the true human interaction with these pet pictures. For this reason, I like this model the most, even though it has the worst RMSE so far.
#
# **Naive Bayes Summary:**
# * **BernoulliNB_clf_RMSE:** 23.468
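# + [markdown]
# One way to quantify "matches the distribution", rather than eyeballing the histograms, is to compare the spread of the predictions to the spread of the actuals. A sketch on made-up numbers (not this notebook's predictions):

```python
import numpy as np

rng = np.random.default_rng(7)
y_actual = rng.integers(1, 101, size=500).astype(float)

# stand-ins: a mean-hugging model vs one that spreads its guesses
narrow_pred = rng.normal(y_actual.mean(), 3, size=500)
spread_pred = rng.integers(1, 101, size=500).astype(float)

# a ratio near 1.0 means the predictions have realistic spread
for name, pred in [('narrow', narrow_pred), ('spread', spread_pred)]:
    print(f"{name}: std ratio = {pred.std() / y_actual.std():.2f}")
```

# By this measure the Bernoulli Naive Bayes model looks much healthier than the mean-hugging regressors, even though its RMSE is worse.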
# + [markdown] papermill={"duration": 0.073031, "end_time": "2022-03-10T02:29:26.498554", "exception": false, "start_time": "2022-03-10T02:29:26.425523", "status": "completed"} tags=[]
# -------
# + [markdown] papermill={"duration": 0.071862, "end_time": "2022-03-10T02:29:26.64394", "exception": false, "start_time": "2022-03-10T02:29:26.572078", "status": "completed"} tags=[]
# ## 7: Ensemble Methods
# Ensemble methods can be quite useful in a number of applications. If you want to learn more about ensemble methods, bagging, boosting, and stacking, check out this great resource online: [Link](https://analyticsindiamag.com/basics-of-ensemble-learning-in-classification-techniques-explained/). Some of them, like Random Forests, can get quite slow to train on big datasets, but this dataset is small enough for them. For now, I'll just show Random Forest Regression and a version of gradient boosting called Histogram-based Gradient Boosting Regression, which is based on LightGBM.
#
# * If you want to learn more about Ensemble Methods or try out different ones: [SciKit Learn Documentation - Ensemble Methods](https://scikit-learn.org/stable/modules/ensemble.html#ensemble)
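# + [markdown]
# Before settling on any ensemble, cross-validation (we imported `cross_val_score` earlier but haven't used it yet) gives a more stable RMSE estimate than a single train/test split. A sketch on made-up binary features, not the competition data:

```python
import math
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# hypothetical binary-feature data standing in for the metadata
X = rng.integers(0, 2, size=(500, 12)).astype(float)
y = rng.integers(1, 101, size=500).astype(float)

# 5-fold cross-validated RMSE; sklearn reports negated MSE for scoring
model = RandomForestRegressor(n_estimators=50, max_depth=3, random_state=7)
neg_mse = cross_val_score(model, X, y, cv=5, scoring='neg_mean_squared_error')
rmse_per_fold = [math.sqrt(-s) for s in neg_mse]
print(f"mean CV RMSE: {np.mean(rmse_per_fold):.3f}")
```

# Averaging over five folds smooths out the luck of any single split when comparing models.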
# + [markdown] papermill={"duration": 0.071262, "end_time": "2022-03-10T02:29:26.788675", "exception": false, "start_time": "2022-03-10T02:29:26.717413", "status": "completed"} tags=[]
# ### 7.1 Random Forest Regression
# A random forest is pretty much just a bunch of decision trees, each trained on a random subsample of the data, whose individual predictions are combined by consensus (averaged, for regression) into a single prediction.
# * For more on Random Forests and parameter tuning: [SciKit Learn Documentation - Random Forests](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html)
# + papermill={"duration": 0.253695, "end_time": "2022-03-10T02:29:27.112643", "exception": false, "start_time": "2022-03-10T02:29:26.858948", "status": "completed"} tags=[]
#create the Random Forest ensemble
RF_reg = RandomForestRegressor(n_estimators=50, max_depth=3)
#train the model
start = time.time()
RF_reg.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
RF_reg_pred = RF_reg.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
RF_reg_RMSE = math.sqrt(mean_squared_error(y_test, RF_reg_pred))
print(f'RF_reg_RMSE: {round(RF_reg_RMSE,3)}')
# + papermill={"duration": 2.015029, "end_time": "2022-03-10T02:29:29.198878", "exception": false, "start_time": "2022-03-10T02:29:27.183849", "status": "completed"} tags=[]
#plot it
ActualvPredictionsGraph(y_test[0:50], RF_reg_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, RF_reg_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(RF_reg_pred,color='r',alpha=0.3,stat='probability',kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability', kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + papermill={"duration": 0.113884, "end_time": "2022-03-10T02:29:29.390194", "exception": false, "start_time": "2022-03-10T02:29:29.27631", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
x_test_submission = test_df.drop(['Id'],axis=1)
test_df['Pawpularity'] = RF_reg.predict(x_test_submission)
submission_df = test_df[['Id','Pawpularity']]
submission_df.to_csv("submission.csv", index=False)
submission_df.head()
# + [markdown] papermill={"duration": 0.078307, "end_time": "2022-03-10T02:29:29.55018", "exception": false, "start_time": "2022-03-10T02:29:29.471873", "status": "completed"} tags=[]
# The Random Forest isn't really any better than a single decision tree in our case. It predicts in pretty much the same way, guessing that everything has a Pawpularity near the mean score.
# + [markdown] papermill={"duration": 0.075333, "end_time": "2022-03-10T02:29:29.70379", "exception": false, "start_time": "2022-03-10T02:29:29.628457", "status": "completed"} tags=[]
# ### 7.2 Histogram-based Gradient Boosting Regression
# This is pretty much **LightGBM**, but you can use it in scikit-learn! In scikit-learn versions before 0.24 the estimator was experimental, so you also had to call `from sklearn.experimental import enable_hist_gradient_boosting`; in newer versions the plain import is enough.
# * For more and parameter info: [SciKit Learn Documentation - Histogram-based Gradient Boosting Regression](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.HistGradientBoostingRegressor.html)
#
# + papermill={"duration": 1.071932, "end_time": "2022-03-10T02:29:30.853975", "exception": false, "start_time": "2022-03-10T02:29:29.782043", "status": "completed"} tags=[]
#create the Histogram-based Gradient Boosting Regression model
HistGB_reg = HistGradientBoostingRegressor()
#train the model
start = time.time()
HistGB_reg.fit(x_train, y_train)
stop = time.time()
#predict the response for the test data
HistGB_reg_pred = HistGB_reg.predict(x_test)
#print the RMSE
print(f'Training time: {round((stop - start),3)} seconds')
HistGB_reg_RMSE = math.sqrt(mean_squared_error(y_test, HistGB_reg_pred))
print(f'HistGB_reg_RMSE: {round(HistGB_reg_RMSE,3)}')
# + papermill={"duration": 2.334354, "end_time": "2022-03-10T02:29:33.281552", "exception": false, "start_time": "2022-03-10T02:29:30.947198", "status": "completed"} tags=[]
#plot it
ActualvPredictionsGraph(y_test[0:50], HistGB_reg_pred[0:50], "First 50 Actual v. Predicted")
ActualvPredictionsGraph(y_test, HistGB_reg_pred, "All Actual v. Predicted")
#plot actual v predicted in histogram form
plt.figure(figsize=(12,4))
sns.histplot(HistGB_reg_pred,color='r',alpha=0.3,stat='probability',kde=True)
sns.histplot(y_test,color='b',alpha=0.3,stat='probability', kde=True)
plt.legend(labels=['prediction','actual'])
plt.title('Actual v Predict Distribution')
plt.show()
# + papermill={"duration": 0.123077, "end_time": "2022-03-10T02:29:33.487825", "exception": false, "start_time": "2022-03-10T02:29:33.364748", "status": "completed"} tags=[]
#Now submit to the competition using the model:
test_df = pd.read_csv('../input/petfinder-pawpularity-score/test.csv')
x_test_submission = test_df.drop(['Id'],axis=1)
test_df['Pawpularity'] = HistGB_reg.predict(x_test_submission)
submission_df = test_df[['Id','Pawpularity']]
submission_df.to_csv("submission.csv", index=False)
submission_df.head()
# + [markdown] papermill={"duration": 0.103758, "end_time": "2022-03-10T02:29:33.690554", "exception": false, "start_time": "2022-03-10T02:29:33.586796", "status": "completed"} tags=[]
# Nice, so our ensemble methods worked, but not significantly better than the simpler models. Also, because this HistGB model is the last one in the notebook, its predictions are what get submitted; it's better to keep a separate notebook with only one model for your actual submissions.
#
# **Ensemble Methods Summary:**
# * **RF_reg_RMSE:** 20.838
# * **HistGB_reg_RMSE:** 20.924
# + [markdown] papermill={"duration": 0.104758, "end_time": "2022-03-10T02:29:33.896038", "exception": false, "start_time": "2022-03-10T02:29:33.79128", "status": "completed"} tags=[]
# -------
# + [markdown] papermill={"duration": 0.095873, "end_time": "2022-03-10T02:29:34.100025", "exception": false, "start_time": "2022-03-10T02:29:34.004152", "status": "completed"} tags=[]
# ## 8: Conclusion
#
# We ran 7 different models, and none did particularly well! 😅
#
# **Summary of Models:**
#
# | Model | RMSE |
# |--------------------------------------------------|--------|
# | 4.1 Decision Tree Regression | 20.857 |
# | 4.2 Decision Tree Classification | 22.900 |
# | 5.1 Ordinary Least Square Regression | 20.827 |
# | 5.2 Ridge Regression | 20.827 |
# | 6.1 Bernoulli Naive Bayes Classification | 23.468 |
# | 7.1 Random Forest Regression | 20.838 |
# | 7.2 Histogram-based Gradient Boosting Regression | 20.924 |
#
#
# These models appear to perform poorly because the metadata is just not very predictive of the target Pawpularity. The fact that all the models score almost identically even when their parameters are modified supports this idea. They are all essentially guessing the mean, or close to it, in an attempt to minimize error. You could sit here all day trying to find the perfect parameter settings for these or other models, but it would never produce incredible results. In this case the data and the target are the problem, not the models chosen. From this we can conclude that if there is in fact a way to build better models for Pawpularity, it will involve using the images and not this metadata.
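The "guessing the mean" claim has a simple numeric consequence worth sketching. Using toy numbers (not the competition data): a model that predicts the mean for every example has an RMSE equal to the population standard deviation of the targets, which is why all the mean-hugging models land so close together.

```python
import math
from statistics import pstdev

toy_y = [10, 20, 30, 40, 50]                 # toy Pawpularity-like targets
toy_mean = sum(toy_y) / len(toy_y)
toy_preds = [toy_mean] * len(toy_y)          # "always guess the mean"
toy_rmse = math.sqrt(sum((p - v) ** 2 for p, v in zip(toy_preds, toy_y)) / len(toy_y))
print(round(toy_rmse, 3))  # 14.142, exactly the population std-dev of toy_y
```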
#
# I'll post a Tutorial Part 3 soon which will cover how to build models using the images and hopefully see some lower RMSE scores!
#
# ### Hope this helped!
| pawpularity-contest/tutorial-part-2-model-building-using-the-metadata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import string
import re
import datetime
import collections
import math
import heapq
import numpy as np
from tqdm import tqdm
import pickle
import operator
# ## marks lists
#
# +
punctuation_marks = [')','(','>','<',"؛","،",'{','}',"؟",':',"-", '»', '"', '«', '[', ']','"','+','=','?']
marks = ['/','//', '\\','|','!', '%', '&','*','$', '#','؟', '*','.','_' ]
alphabet_string_lower = string.ascii_lowercase
alphabet_string_upper = string.ascii_uppercase
english_char = list(alphabet_string_lower) + list(alphabet_string_upper)
sep_list = [" ", '\xad', '\u200e','\u200f', '\u200d', '\u200d', '\u200d'] + marks
# stop_words1 =['به', 'و', 'در', 'با', 'این', 'شد', 'را', 'که', 'از', 'که', 'این', 'با', 'است', 'برای', 'آن', 'یک', 'خود', 'تا', 'کرد', 'بر', 'هم', 'نیز', 'هزار', 'ریال', 'بود']
stop_words = ["برای", "پس", "سپس", "تا", "از", "که", "و", "به", "را", "با", "بر", "در", "اینکه", "این", "ان", "چرا",
"شاید", "انها", "چون", "انطور", "اینطور", "انچه", "آخ", "آخر", "آخرها", "اخه", "آره", "آری", "انان",
"اگر", "درباره", "خیلی", "توی", "بلکه", "بعضی", "بعدا", "باید", "البته", "الان", "اصلا", "اساسا",
"احتمالا", "اخیرا", "ایا", "اهان", "امرانه", "ان گاه", "انرا", "انقدر", "اساسا", "بنابراین", "هرحال",
"بیانکه", "بهقدری", "بیشتر", "کمتر", "تر", "ترین", "تو", "چند", "چندین", "چه", "خیلی", "دیگر", "دیگه",
"طبیعتا", "عمدا", "گاهی", "نیز", "او", "تو", "من", "ما", "طور", "دو", "هر", "همه", "ایم", "اند", "اید",
"است", "هست", "زدگی", "خصوصا", "ن", "ب", "کسی", "چیزی", "بالاخره", "وقتی", "زمانی", "می"]
# -
# ## stop_words
# +
# stop_word_data_20k = ['', 'پیام', 'خواه','دار', 'شده', '۵', 'گزار', 'ها', 'گیر', 'قرار', 'انت', '۴', 'حضور', 'وی', 'می', 'ان', 'بین', 'سال', '۷', 'گفت', 'کار', 'ی', 'کرده', 'اما', 'ادامه', 'شو', 'روز', 'کن', 'گذشته', 'داشت', 'های', 'باز', 'انجام', 'داد', '۳', 'باید', 'اینکه', 'پیش', 'بیش', 'دو', 'باش', 'توان', 'دیگر', '۲', 'عنو', 'امروز', '۹', '۸', 'پای', 'مورد', 'صورت', 'مردم', 'ب', 'هر', 'ما', 'ایر', 'رو', 'اظهار', '۱', 'افزود', '–', '۶', 'ا', 'ر', 'ث', '۰', 'ش', 'د', 'ژ', 'ک', 'ع', 'ه', 'ج', 'ز', 'م', 'ت', '٢', '٣', '٧', 'ق', 'ل', 'ن', '@', ',', '\u2066', '٩', '٤', '٥', '٨', '١', '٦', 'س', 'ص', 'ء', 'ف', "'", 'ö', 'گ', '》', 'ط', 'پ', '…', 'خ', 'ê', 'ح', '\u2067', '\u2069', 'ۗ', '•', '٪', '\u200b', '×', 'ۚ', 'ۖ', 'چ', '"', '\u2063', 'ذ', '\uf0d8', '۔', 'ü', 'ﻭ', 'غ', '️', 'ﷲ', '\u202c', '\u202a', 'ﻫ', 'ۙ', 'ض', '·', '¬', ';', 'خبر گزاری','گزازش','خبرنگار',]
# # stop_word_data_17k = ['', 'درصد', 'وی', 'کن', 'توان', 'پیام', 'ما', 'گفت', 'دولت', 'خواه', 'باید', 'اینکه','ان', 'تولید', 'دار', 'اگر', 'هستند', , 'شو', 'پیش', 'وجود','ایر', 'شده', 'کار', 'برخی', 'رو', 'بین', 'توسط', 'گیر', 'نظر', 'باش', 'همچنین', 'سال', 'دو', 'داشت', 'انجام', 'افزای', 'اما', 'هر', 'حال', 'قرار', 'مورد', 'افزود', 'گرفته', 'انها', 'انت', 'دلیل', 'ب', 'ادامه', 'می', 'شرکت', 'اساس', 'پای', 'گزار', 'کرده', 'ساز', 'بخش', 'بیش', 'داد', 'موضوع', 'روز', 'یا', 'شرایط', 'بوده', 'همین', 'کاهش', 'گذشته', 'توجه', 'یکی', 'امروز', 'ماه', 'زار', 'اقدام', '۵', 'داشته', '۲', 'عنو', 'داده', 'های', 'دیگر', 'نسبت', 'بیان', 'همه', '۸', 'اعلام', 'نقل', 'پس', '۷', 'میلیون', '۹', 'ر', '۶', '۴', '۳', 'ه', '۰', 'گروه', 'اشاره', '۱۳۹۹', '۱', 'ج', 'مردم', 'استفاده', 'ی', 'ت', '٥', '٢', '١', '٣', '٦', 'د', '–', 'ژ', 'ص', 'ع', 'م', 'پ', 'ا', '٤', 'ش', '٧', '٩', 'ک', 'ز', 'ف', 'ح', '’', 'ن', ',', '٨', "'", 'چ', '٠', '•', '·', '\u200b', 'غ', '…', 'ل', 'خ', 'ذ', 'ث', '\uf0d8', 'س', '@', 'ق', 'ء', 'ط', 'گ', ';', '٬', 'ۗ', '\uf0fc', '—', 'ñ', 'á', 'ﻫ', '×', '−', '±', '"', '﴾', '°', 'ﻭ', '🔹', 'ۚ', 'ۖ', '\u202b', 'ş', 'é', 'ة', 'ۙ']
# stop_word_data_11k = ['', 'پیام', 'اینکه', 'ایر', 'شده', 'کن', 'باش', 'کشور', 'های', 'کار', 'روز', 'اعلام', 'گزار', 'همچنین', 'سال', 'کرده', 'داد', 'انت', 'عنو', '۳', 'حضور', 'خواه', 'وی', 'افزود', 'گفت', 'بیان', 'توان', 'شو', 'بخش', 'باید', 'دار', 'داشت', 'ان', 'توجه', 'مورد', 'دو', 'انجام', '۲', 'رو', 'هر', 'یا', 'صورت', 'هستند', 'اما', 'ی', '٢', 'گذشته', '۴', 'قرار', 'بیش', 'دیگر', 'ادامه', 'امروز', '۰', '۶', '۵', '۹', '۷', '۸', '۱', 'ا', '٣', 'ب', 'س', '–', '٨', 'ج', '٧', 'پ', "'", 'ز', 'ر', 'ه', 'ع', 'ء', 'م', '\u202c', 'ش', '٦', 'ن', '٩', '@', 'ۚ', '٤', 'د', 'ص', 'ک', '\u202b', 'ث', 'چ', 'ل', 'ط', 'ح', 'ف', 'ت', '…', '٥', '\u2067', '١', 'ذ', 'ق', '—', '٠', ',', 'غ', '\ufeff', '٪', '"', '\u2069', '•', 'ۗ', '’', 'ژ', 'خ', 'и', 'О', 'с', 'б', 'Я', 'С', 'в', '۔', '·','بر','تا','در','به','از', 'گ', 'Ö', 'é', ';', '\u206b', '》', '⠀', '\u202a', 'ۖ', '❇', 'ﻫ'] + stop_words1
# stop_words = set(stop_word_data_20k + stop_word_data_11k)
# -
# ## regex patterns, prefixe list and postfix list
# +
raw_postfix = [
('[\u200c](تر(ین?)?|گری?|های?)' , ''),
(r'(تر(ین?)?|گری?)(?=$)' , ''), #تر ، ترین، گر، گری در آخر کلمه
# (r'(?<=[^ او])(م|ت|ش|مان|تان|شان|ی)$' , ''), # حذف شناسه های مالکیت و فعل در آخر کلمه
(r'(?<=[^ او])(م|ت|ش|مان|تان|شان)$' , ''), # حذف شناسه های مالکیت و فعل در آخر کلمه
(r'(ان|ات|گی|گری|های)$' , ''), #ان، ات، ها، های آخر کلمه
]
raw_arabic_notation = [
# remove FATHATAN, DAMMATAN, KASRATAN, FATHA, DAMMA, KASRA, SHADDA, SUKUN
('[\u064B\u064C\u064D\u064E\u064F\u0650\u0651\u0652]', ''),
]
raw_long_letters = [
(r' +', ' '), # remove extra spaces
(r'\n\n+', '\n'), # remove extra newlines
(r'[ـ\r]', ''), # remove keshide, carriage returns
]
raw_half_space = [
('[\u200c]', ''),
]
# -
# ## verbs, mokassar,morakab list
# +
present_roots =['توان','باش','رو','بر','یاور', 'یانداز', 'یای','یاندیش','بخش','باز','خر','بین','شنو','دار','دان','رسان','شناس','گو','گذار','یاب','لرز','ساز','شو','نویس','خوان','کاه','گیر','خواه','کن' ]
past_roots = ['توانست','بود','کرد','اورد','انداخت','امد','خرید','باخت','برد','رفت','اندیشید','بخشید','دید','شنید','داشت','دانست','رساند','شناخت','گفت','گذشت','یافت','لرزید','ساخت','شد','نوشت','خواند','کاست','گرفت','خواست']
all_verbs_roots = present_roots + past_roots
empty_list = ['','','','','','','']
verb_prefix = ['نمی', 'می','ن','ب',"" ]
present_verb_postfix = ['م','ی','د','ید','ند','یم']
past_verb_postfix = ['ایم','اید','ای','ام','اند']
past_verb_postfix2 = ['م','ی','ید','ند','یم']
present_verbs = []
past_verbs = []
all_verbs = {}
for pref in verb_prefix:
for present_root, past_root in zip(present_roots, past_roots):
for post in past_verb_postfix2:
all_verbs[pref + past_root+post] = past_root
for post in past_verb_postfix:
all_verbs[past_root+'ه'+post] = past_root
for post in present_verb_postfix:
all_verbs[pref+present_root+post] = present_root
words_list = ['ادب', 'آداب', 'طرف', 'اطراف', 'حقیقت', 'حقایق', 'موج', 'امواج', 'ادیب', 'ادبا', 'عمق', 'اعماق', 'خزینه', 'خزائن', 'مرکز', 'مراکز', 'اثر', 'آثار', 'عالم', 'علما', 'خبر', 'اخبار', 'موقع', 'مواقع', 'اسیر', 'اسرا', 'علم', 'علوم', 'دوره', 'ادوار', 'مصرف', 'مصارف', 'اسم', 'اسامی', 'علامت', 'علائم', 'دین', 'ادیان', 'معرفت', 'معارف', 'اسطوره', 'اساطیر', 'علت', 'علل', 'دفتر', 'دفاتر', 'مبحث', 'مباحث', 'امیر', 'امرا', 'عقیده', 'عقائد', 'ذخیره', 'ذخایر', 'ماده', 'مواد', 'امر', 'اوامر', 'عمل', 'اعمال', 'رفیق', 'رفقا', 'مذهب', 'مذاهب', 'امام', 'ائمه', 'عید', 'اعیاد', 'رای', 'آرا', 'مصیبت', 'مصائب', 'اصل', 'اصول', 'عنصر', 'عناصر', 'رسم', 'رسوم', 'معبد', 'معابد', 'افق', 'آفاق', 'عاطفه', 'عواطف', 'رابطه', 'روابط', 'مسجد', 'مساجد', 'بیت', 'ابیات', 'عضو', 'اعضا', 'رمز', 'رموز', 'معبر', 'معابر', 'تاجر', 'تجار', 'عبارت', 'عبارات', 'رجل', 'رجال', 'مظهر', 'مظاهر', 'تصویر', 'تصاویر', 'عجیب', 'عجایب', 'رقم', 'ارقام', 'منظره', 'مناظر', 'جد', 'اجداد', 'فقیه', 'فقها', 'زاویه', 'زوایا', 'مرض', 'امراض', 'جانب', 'جوانب', 'فن', 'فنون', 'زعیم', 'زعما', 'مورد', 'موارد', 'جزیره', 'جزایر', 'فکر', 'افکار', 'سانحه', 'سوانح', 'مرحله', 'مراحل', 'جبل', 'جبال', 'فریضه', 'فرایض', 'سلطان', 'سلاطین', 'مفهوم', 'مفاهیم', 'جریمه', 'جرایم', 'فعل', 'افعال', 'شعر', 'اشعار', 'منبع', 'منابع', 'حادثه', 'حوادث', 'فقیر', 'فقرا', 'شاعر', 'شعرا', 'مکان', 'اماکن', 'حشم', 'احشام', 'قاعده', 'قواعد']
mokassar_dict ={}
for i in range(0,len(words_list),2):
mokassar_dict[words_list[i+1]] = words_list[i]
# -
# ## tokenizer
# +
def split(txt, seps):
default_sep = seps[0]
# we skip seps[0] because that's the default separator
for sep in seps[1:]:
txt = txt.replace(sep, default_sep)
return [i.strip() for i in txt.split(default_sep)]
#tokenize a document: strip English characters, then split on the separator list
def tokenizer(text):
for l in english_char:
text = text.replace(l,"")
word_list = split(text.strip().replace("\n"," "), sep_list)
word_list = [x for x in word_list if not x.startswith("ir") and not x.startswith("NID")]
return word_list
# -
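As a small illustration (toy Latin-script input, since the behavior is script-agnostic), `split` rewrites every separator to `seps[0]` and then tokenizes with a single `str.split`:

```python
def split(txt, seps):
    # identical to the helper above: rewrite every separator to seps[0],
    # then tokenize with one str.split call
    default_sep = seps[0]
    for sep in seps[1:]:
        txt = txt.replace(sep, default_sep)
    return [i.strip() for i in txt.split(default_sep)]

print(split("a,b;c", [" ", ",", ";"]))  # ['a', 'b', 'c']
```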
# ## normalizer
# +
def remove_punctuation_marks(word_list):
# word_list = list(word_list)
for i in range(len(word_list)):
for mark in punctuation_marks:
word_list[i] = word_list[i].replace(mark,"")
return word_list
def edit_long_letters(word_list):
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_long_letters]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
word_list[i] = pattern.sub(rep, word_list[i])
return word_list
def remove_arabic_notation(word_list):
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_arabic_notation]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
word_list[i] = pattern.sub(rep, word_list[i])
return word_list
def create_translation_table(src_list, dst_list):
return dict((ord(a), b) for a, b in zip(src_list, dst_list)) #map src unicode to dst char
def char_digit_Unification(word_list):
word_list = [x for x in word_list if not x.isnumeric() or (x.isnumeric() and float(x)<3000)]
translation_src, translation_dst = ' ىكي“”', ' یکی""'
translation_src += 'ئ0123456789أإآ%'
translation_dst += 'ی۰۱۲۳۴۵۶۷۸۹ااا٪'
translations_table = create_translation_table(translation_src, translation_dst)
for i in range(len(word_list)):
word_list[i] = word_list[i].translate(translations_table)
return word_list
def remove_mokassar(word_list):
for i in range(len(word_list)):
word_list[i] = mokassar_dict.get(word_list[i],word_list[i])
return word_list
def verb_Steaming(word_list):
for i in range(len(word_list)):
word_list[i] = all_verbs.get(word_list[i],word_list[i])
return word_list
def remove_postfix(word_list):
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_postfix]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
if len(word_list[i])>4 and not (word_list[i] in all_verbs_roots):
word_list[i] = pattern.sub(rep, word_list[i])
return word_list
def remove_prefix(word_list):
starts =['بی' , 'نا', 'با']
for i in range(len(word_list)):
for start in starts:
if word_list[i].startswith(start) and len(word_list[i])>4:
word_list[i] = word_list[i][len(start):]
break
return word_list
def morakab_Unification(word_list):
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_half_space]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
word_list[i] = pattern.sub(rep, word_list[i])
return word_list
#all together
def normalizer(word_list):
#remove punc
for i in range(len(word_list)):
for mark in punctuation_marks:
word_list[i] = word_list[i].replace(mark,"")
#edit long letters
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_long_letters]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
word_list[i] = pattern.sub(rep, word_list[i])
#remove mokassar
for i in range(len(word_list)):
word_list[i] = mokassar_dict.get(word_list[i],word_list[i])
#remove_arabic_notation
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_arabic_notation]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
word_list[i] = pattern.sub(rep, word_list[i])
#char_digit_Unification
word_list = [x for x in word_list if not x.isnumeric() or (x.isnumeric() and float(x)<3000)]
translation_src, translation_dst = ' ىكي“”', ' یکی""'
translation_src += 'ئ0123456789أإآ%'
translation_dst += 'ی۰۱۲۳۴۵۶۷۸۹ااا٪'
translations_table = create_translation_table(translation_src, translation_dst)
for i in range(len(word_list)):
word_list[i] = word_list[i].translate(translations_table)
#verb_Steaming
for i in range(len(word_list)):
word_list[i] = all_verbs.get(word_list[i],word_list[i])
#remove_prefix
starts =['بی' , 'نا', 'با']
for i in range(len(word_list)):
for start in starts:
if word_list[i].startswith(start) and len(word_list[i])>4:
word_list[i] = word_list[i][len(start):]
break
#remove_postfix
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_postfix]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
if len(word_list[i])>4 and not (word_list[i] in all_verbs_roots):
word_list[i] = pattern.sub(rep, word_list[i])
#morakab_Unification
patterns_compiled = [(re.compile(x[0]), x[1]) for x in raw_half_space]
for pattern, rep in patterns_compiled:
for i in range(len(word_list)):
word_list[i] = pattern.sub(rep, word_list[i])
    #remove stop words (filter the list; removing items while iterating over it skips elements)
    word_list = [word for word in word_list if word not in stop_words]
    return [x for x in word_list if len(x) >= 2]
# -
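The digit/character unification above relies on `str.translate` with a `{codepoint: replacement}` dict. A minimal Latin-script sketch of the same trick (the Persian table in the notebook works identically):

```python
def create_translation_table(src_list, dst_list):
    # identical to the helper above: map each source codepoint to its replacement
    return dict((ord(a), b) for a, b in zip(src_list, dst_list))

table = create_translation_table("ABC", "abc")
print("CAB-X".translate(table))  # cab-X  (unmapped characters pass through)
```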
# ## read data
# %%time
all_data = pd.DataFrame()
pd_list = []
pd_list .append(pd.read_excel('IR00_3_11k News.xlsx', engine = 'openpyxl'))
pd_list .append(pd.read_excel('IR00_3_17k News.xlsx', engine = 'openpyxl'))
pd_list .append(pd.read_excel('IR00_3_20k News.xlsx', engine = 'openpyxl'))
# pd_list .append(pd.read_excel('3.xlsx', engine = 'openpyxl'))
# pd_list .append(pd.read_excel('2.xlsx', engine = 'openpyxl'))
# pd_list .append(pd.read_excel('1.xlsx', engine = 'openpyxl'))
all_data = pd.concat(pd_list, ignore_index=True)  # DataFrame.append was removed in pandas 2.x
all_data["topic"].replace({"political": "politics", "sport": "sports"}, inplace=True)
all_data = all_data[all_data.content != 'None\n\n']
all_data = all_data.reset_index(drop=True)
all_data.index +=1
all_data['id'] = all_data.index
contents = all_data['content'].to_list()
doc_ids = all_data['id'].to_list()
doc_url = all_data['url']
docs_topic = all_data['topic']
docs_num = len(doc_ids)
# ## create inverted index and change docs content to bag of word
# +
def create_dictionary(contents, doc_ids):
for content, id in zip(contents,doc_ids):
doc_tokens = tokenizer(content)
doc_normalized_tokens = normalizer(doc_tokens)
term_freq_dict = dict(collections.Counter(doc_normalized_tokens))
for term, freq in term_freq_dict.items():
term_freq_dict[term] = (1 + math.log((freq), 10))
contents[id-1] = term_freq_dict
        for term,freq in term_freq_dict.items():
            term_postings = inverted_index.get(term,{})
            term_postings[id] = freq  # freq already holds the 1+log10(tf) weight
            inverted_index[term] = term_postings
def calculate_tf_idf_weight(contents):
docs_len = []
for doc in contents:
length = 0
for term in doc.keys():
df = len(inverted_index[term])
idf = math.log(docs_num/df,10)
doc[term] *= idf
            length += doc[term]**2
        docs_len.append(math.sqrt(length))
return docs_len
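A toy run of the same weighting on a hypothetical two-term index (base-10 logs, Euclidean-norm convention for document length), to make the arithmetic concrete:

```python
import math

toy_docs_num = 4                                  # hypothetical collection size
toy_index = {"cat": {1: 1 + math.log(2, 10)},     # "cat" occurs twice in doc 1
             "dog": {1: 1.0, 2: 1.0}}             # "dog" occurs once in docs 1 and 2

toy_doc = {t: postings[1] for t, postings in toy_index.items() if 1 in postings}
toy_sq_len = 0.0
for term in toy_doc:
    idf = math.log(toy_docs_num / len(toy_index[term]), 10)
    toy_doc[term] *= idf                          # tf-idf weight
    toy_sq_len += toy_doc[term] ** 2              # sum of squares for the norm
toy_len = math.sqrt(toy_sq_len)
```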
# +
# %%time
inverted_index ={}
create_dictionary(contents, doc_ids)
docs_len = calculate_tf_idf_weight(contents)
# -
# ## save inverted_index and docs_len
# +
# # %%time
# file_inverted_index = open('ph3_KNN_inverted_index.obj', 'wb')
# pickle.dump(inverted_index, file_inverted_index)
# file_inverted_index.close()
# inverted_index = {}
# +
# file_docs_len = open('ph3_KNN_docs_len.obj', 'wb')
# pickle.dump(docs_len, file_docs_len)
# file_docs_len.close()
# docs_len = {}
# +
# file_contents = open('ph3_KNN_contents.obj', 'wb')
# pickle.dump(contents, file_contents)
# file_contents.close()
# contents =[]
# +
# inverted_index['سلام']
# docs_len =[]
# -
# ## read inverted_index and docs_len
# %%time
readed = open('ph3_KNN_inverted_index.obj', 'rb')
inverted_index = pickle.load(readed)
readed = open('ph3_KNN_docs_len.obj', 'rb')
docs_len = pickle.load(readed)
readed = open('ph3_KNN_contents.obj', 'rb')
contents = pickle.load(readed)
# ## read unlabeled data
unlabeled_inverted_index = {}
unlabeled_df = pd.read_excel('IR_Spring2021_ph12_7k.xlsx', engine = 'openpyxl')
unlabeled_contents = unlabeled_df['content'].to_list()
unlabeled_doc_ids = unlabeled_df['id'].to_list()
unlabeled_doc_url = unlabeled_df['url']
unlabeled_docs_num = len(unlabeled_doc_ids)
# ## create unlabeled inverted_index and update unlabeled_contents
# +
def create_unlabaled_dictionary(unlabeled_contents, unlabeled_doc_ids):
for content, id in zip(unlabeled_contents,unlabeled_doc_ids):
doc_tokens = tokenizer(content)
doc_normalized_tokens = normalizer(doc_tokens)
term_freq_dict = dict(collections.Counter(doc_normalized_tokens))
for term, freq in term_freq_dict.items():
term_freq_dict[term] = (1 + math.log((freq), 10))
unlabeled_contents[id-1] = term_freq_dict
for term,freq in term_freq_dict.items():
term_postings = unlabeled_inverted_index.get(term,{})
term_postings[id] = freq
unlabeled_inverted_index[term] = term_postings
def calculate_tf_idf_weight(unlabeled_contents):
unlabeled_docs_len = []
for doc in unlabeled_contents:
length = 0
for term in doc.keys():
df = len(inverted_index[term])
idf = math.log(docs_num/df,10)
doc[term] *= idf
            length += doc[term]**2
        unlabeled_docs_len.append(math.sqrt(length))
return unlabeled_docs_len
# -
# %%time
unlabeled_inverted_index = {}
create_unlabaled_dictionary(unlabeled_contents, unlabeled_doc_ids)
unlabeled_docs_len = calculate_tf_idf_weight(unlabeled_contents)
# ## save unlabeled data
# %%time
file_unlabeled_inverted_index = open('ph3_KNN_unlabeled_inverted_index.obj', 'wb')
pickle.dump(unlabeled_inverted_index, file_unlabeled_inverted_index)
file_unlabeled_inverted_index.close()
unlabeled_inverted_index = {}
file_unlabeled_docs_len = open('ph3_KNN_unlabeled_docs_len.obj', 'wb')
pickle.dump(unlabeled_docs_len, file_unlabeled_docs_len)
file_unlabeled_docs_len.close()
unlabeled_docs_len = {}
file_unlabeled_contents = open('ph3_KNN_unlabeled_contents.obj', 'wb')
pickle.dump(unlabeled_contents, file_unlabeled_contents)
file_unlabeled_contents.close()
unlabeled_contents =[]
# ## read unlabeled inverted_index and contents
# %%time
readed = open('ph3_KNN_unlabeled_inverted_index.obj', 'rb')
unlabeled_inverted_index = pickle.load(readed)
readed = open('ph3_KNN_unlabeled_docs_len.obj', 'rb')
unlabeled_docs_len = pickle.load(readed)
readed = open('ph3_KNN_unlabeled_contents.obj', 'rb')
unlabeled_contents = pickle.load(readed)
# ## KNN
# +
def extract_top_k(doc_terms, k):
similarity = {}
docs_score = [0] * docs_num
for term,Wt_q in doc_terms.items():
        idf = math.log(docs_num/len(inverted_index[term]), 10)
for doc_id, freq in inverted_index[term].items():
docs_score[doc_id-1] += (Wt_q * freq)*idf
docs_score = list((np.array(docs_score))/(np.array(docs_len)))
for doc_id in doc_ids:
if docs_score[doc_id-1] != 0:
similarity[doc_id] = docs_score[doc_id-1]
heap = [(-value, key) for key,value in similarity.items()]
largest = heapq.nsmallest(k, heap)
top_k = [(y, -x) for x,y in largest]
return top_k
# -
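In isolation, the negate-and-`nsmallest` pattern used by `extract_top_k`: pushing `(-score, id)` tuples means the smallest tuples correspond to the highest-scoring documents.

```python
import heapq

scores = {101: 0.4, 102: 0.9, 103: 0.7}      # hypothetical doc_id -> similarity
neg_heap = [(-value, key) for key, value in scores.items()]
top_2 = [(doc_id, -neg) for neg, doc_id in heapq.nsmallest(2, neg_heap)]
print(top_2)  # [(102, 0.9), (103, 0.7)]
```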
def KNN(unlabeled_doc_ids,unlabeled_contents, k ):
categories = {"culture":[], "economy":[], "sports":[], "politics":[], "health":[]}
for doc_id in tqdm(unlabeled_doc_ids):
doc_terms = unlabeled_contents[doc_id-1]
top_k = extract_top_k(doc_terms, k)
labels = [docs_topic[x] for x,_ in top_k]
categories[max(set(labels), key=labels.count)].append(doc_id)
return categories
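The classification step inside `KNN` is a plain majority vote over the k nearest neighbours' labels; sketched here on hypothetical labels:

```python
# hypothetical neighbour labels; in KNN() they come from docs_topic
votes = ["sports", "politics", "sports", "health", "sports"]
winner = max(set(votes), key=votes.count)
print(winner)  # sports
```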
# %%time
cat = KNN(unlabeled_doc_ids, unlabeled_contents, 5)
for categ in cat:
print(categ," ", len(cat[categ]))
# ## save labeled docs
# # %%time
# file_labeled_7kdocs = open('ph3_labeled_7kdocs.obj', 'wb')
# pickle.dump(cat, file_labeled_7kdocs)
# file_labeled_7kdocs.close()
# ## load labeled docs
# %%time
readed = open('ph3_labeled_7kdocs.obj', 'rb')
categories = pickle.load(readed)
# ## query responding
def vectorize_query(query_str):
query_tokens = tokenizer(query_str)
query_normalized_tokens = normalizer(query_tokens)
term_freq_dict = dict(collections.Counter(query_normalized_tokens))
query_terms = {}
for term,freq in term_freq_dict.items():
if term in inverted_index.keys() and term not in stop_words:
query_terms[term] = 1+math.log(freq,10)
print(query_terms)
return query_terms
def score_docs(query_terms, cat_docs):
docs_score = {x:0 for x in cat_docs}
for term,Wt_q in query_terms.items():
for doc_id in cat_docs:
docs_score[doc_id] += Wt_q * unlabeled_contents[doc_id-1].get(term, 0)
for doc_id in cat_docs:
docs_score[doc_id] /= unlabeled_docs_len[doc_id-1]
return docs_score
def extract_top_k(docs_similarity, k):
heap = [(-value, key) for key,value in docs_similarity.items()]
largest = heapq.nsmallest(k, heap)
top_k = [(y, -x) for x,y in largest]
return top_k
def query_responding():
query = input("chose one category between [culture, economy, sports, politics, health] \n enter query like this cat:text : ")
category = query.split(':')[0]
query = query.split(':')[1]
query_terms = vectorize_query(query)
cat_docs = categories[category]
docs_similarity = score_docs(query_terms, cat_docs)
a = datetime.datetime.now()
top_docs = extract_top_k(docs_similarity, 50)
b = datetime.datetime.now()
print("{:<5} result in {} ms\nid \t(score)\t\t -> link\n".format(len(top_docs),1000*(b-a).total_seconds()))
    for doc_id, score in top_docs:
        if score > 0:
            print("{:<5}({:.3f}) -> {} ".format(doc_id, score, unlabeled_doc_url[doc_id-1]))
query_responding()
| Phase3- speed up query responding by K-means clustering and KNN/IR_phase3(KNN).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
# Create a MP4 Movie
# ==================
#
# Create an animated MP4 movie of a rendering scene.
#
# > **Note:** This movie will appear static, since MP4 movies are not rendered
# > in a Sphinx gallery example.
#
# +
import pyvista as pv
import numpy as np
filename = "sphere-shrinking.mp4"
mesh = pv.Sphere()
mesh.cell_arrays["data"] = np.random.random(mesh.n_cells)
plotter = pv.Plotter()
# Open a movie file
plotter.open_movie(filename)
# Add initial mesh
plotter.add_mesh(mesh, scalars="data", clim=[0, 1])
# Add outline for shrinking reference
plotter.add_mesh(mesh.outline_corners())
plotter.show(auto_close=False) # only necessary for an off-screen movie
# Run through each frame
plotter.write_frame() # write initial data
# Update scalars on each frame
for i in range(100):
random_points = np.random.random(mesh.points.shape)
mesh.points = random_points * 0.01 + mesh.points * 0.99
mesh.points -= mesh.points.mean(0)
mesh.cell_arrays["data"] = np.random.random(mesh.n_cells)
plotter.add_text(f"Iteration: {i}", name='time-label')
plotter.write_frame() # Write this frame
# Be sure to close the plotter when finished
plotter.close()
| examples/02-plot/movie.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import random
from scipy.stats import linregress
from pprint import pprint
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# -
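The uniqueness check in the loop above is an order-preserving de-duplication; the same idea on hypothetical city names (the real list comes from citipy):

```python
seen = []
for city in ["lima", "oslo", "lima", "pune", "oslo"]:
    if city not in seen:   # keep first occurrence, preserve insertion order
        seen.append(city)
print(seen)  # ['lima', 'oslo', 'pune']
```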
indx_list = [*range(1,525)]
city_list=[]
random.shuffle(indx_list)
for j in range(0,len(indx_list)):
city_list.append(cities[indx_list[j]].capitalize())
weather_api_key
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
query_url = f"{url}appid={weather_api_key}&units={units}&q="
query_url
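For reference, the query URL is just string composition. "YOUR_KEY" below is a placeholder (not a real API key), and no request is actually sent:

```python
demo_base = "http://api.openweathermap.org/data/2.5/weather?"
demo_key = "YOUR_KEY"                 # placeholder, not a real API key
demo_units = "imperial"
demo_query_url = f"{demo_base}appid={demo_key}&units={demo_units}&q="
print(demo_query_url + "london")
# http://api.openweathermap.org/data/2.5/weather?appid=YOUR_KEY&units=imperial&q=london
```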
response=requests.get(query_url+'yjaris').json()
a1=response.keys()
list(a1)
response
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
temp=[]
wind=[]
hum=[]
cld=[]
crd=[]
for city in city_list:
try:
response = requests.get(query_url + city).json()
print(city,response['id'])
temp.append(response['main']['temp'])
hum.append(response['main']['humidity'])
        wind.append(response['wind']['speed'])
        cld.append(response['clouds']['all'])
crd.append(response['coord'])
except:
print("city not found")
pass
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
data = pd.DataFrame({'City':[ city_list[i] for i in range(0,len(city_list))],
'Coord':[crd[j] for j in range(0,len(crd))]
} )
# ### Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# #### Latitude vs. Temperature Plot
# #### Latitude vs. Humidity Plot
# #### Latitude vs. Cloudiness Plot
# #### Latitude vs. Wind Speed Plot
# ## Linear Regression
# OPTIONAL: Create a function to create Linear Regression plots
# Create Northern and Southern Hemisphere DataFrames
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
| Solutions/.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import spacy
from os import path
from sklearn.model_selection import StratifiedShuffleSplit
# -
DATA_DIR = "../data/meli"
nlp_es = spacy.load("es")
nlp_pt = spacy.load("pt")
train_data = pd.read_csv(path.join(DATA_DIR, "train.csv.gz"))
test_data = pd.read_csv(path.join(DATA_DIR, "test.csv"))
train_data["normalized_title"] = train_data.title.str.lower()
train_data["normalized_title"] = train_data.normalized_title.str.replace("\s+", " ")
test_data["normalized_title"] = test_data.title.str.lower()
test_data["normalized_title"] = test_data.normalized_title.str.replace("\s+", " ")
def process_title(row):
if row.language == "portuguese":
doc = nlp_pt(row.normalized_title, disable=["parser", "ner"])
else:
doc = nlp_es(row.normalized_title, disable=["parser", "ner"])
return [(t.text, t.pos_) for t in doc]
def get_list_values(series, column):
for reg in series:
yield [v[column] for v in reg]
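`get_list_values` simply projects one position out of each `(text, pos)` tuple list; a toy run:

```python
def get_list_values(series, column):
    # identical to the helper above: project one tuple position per row
    for reg in series:
        yield [v[column] for v in reg]

toy_tokens = [[("el", "DET"), ("gato", "NOUN")], [("corre", "VERB")]]
print([r for r in get_list_values(toy_tokens, 0)])  # [['el', 'gato'], ['corre']]
print([r for r in get_list_values(toy_tokens, 1)])  # [['DET', 'NOUN'], ['VERB']]
```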
# +
train_data["tokens"] = train_data.apply(process_title, axis=1)
train_data["words"] = [r for r in get_list_values(train_data.tokens, 0)]
train_data["pos"] = [r for r in get_list_values(train_data.tokens, 1)]
# -
train_data[["title", "label_quality", "language", "words", "pos", "split", "category"]].to_parquet(
DATA_DIR + "/train_tokenized.parquet", index=None
)
# +
test_data["tokens"] = test_data.apply(process_title, axis=1)
test_data["words"] = [r for r in get_list_values(test_data.tokens, 0)]
test_data["pos"] = [r for r in get_list_values(test_data.tokens, 1)]
# -
test_data[["id", "title", "language", "words", "pos"]].to_parquet(
DATA_DIR + "/test_tokenized.parquet", index=None)
# +
reliable_indices = train_data[train_data.label_quality == "reliable"].index
valid_reliable_categories = train_data.loc[reliable_indices]["category"].value_counts()
valid_reliable_categories = set(valid_reliable_categories[valid_reliable_categories >= 5].index)
valid_reliable_indices = train_data[(train_data.label_quality == "reliable") &
(train_data.category.isin(valid_reliable_categories))].index
unreliable_indices = train_data[train_data.label_quality == "unreliable"].index
# +
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.05, random_state=42)
train_index_reliable, dev_index_reliable = next(sss.split(train_data.loc[valid_reliable_indices],
train_data.loc[valid_reliable_indices]["category"]))
sss = StratifiedShuffleSplit(n_splits=1, test_size=0.05, random_state=42)
train_index_unreliable, dev_index_unreliable = next(sss.split(train_data.loc[unreliable_indices],
train_data.loc[unreliable_indices]["category"]))
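`StratifiedShuffleSplit` keeps per-category proportions in both partitions, which is why categories with fewer than 5 reliable examples were filtered out above; a toy check (class sizes are made up):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy labels: 80 of class "a", 20 of class "b"
y = np.array(["a"] * 80 + ["b"] * 20)
X = np.zeros((100, 1))  # features are irrelevant to the split itself

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, dev_idx = next(sss.split(X, y))

# The 20-sample dev set keeps the 4:1 class ratio: 16 "a" and 4 "b"
labels, counts = np.unique(y[dev_idx], return_counts=True)
print(dict(zip(labels.tolist(), counts.tolist())))
```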
# +
train_index = np.hstack([
train_data.loc[valid_reliable_indices].iloc[train_index_reliable].index.values,
train_data.loc[unreliable_indices].iloc[train_index_unreliable].index.values
])
dev_index = np.hstack([
train_data.loc[valid_reliable_indices].iloc[dev_index_reliable].index.values,
train_data.loc[unreliable_indices].iloc[dev_index_unreliable].index.values
])
# -
train_data.loc[train_index, "split"] = "train"
train_data.loc[dev_index, "split"] = "dev"
train_data.split.fillna("dev", inplace=True)
train_data.category.value_counts()
train_data[train_data.split=="dev"].category.value_counts()
train_data[train_data.split=="dev"].groupby(["language", "label_quality"]).size()
train_data.to_parquet(path.join(DATA_DIR, "./train_tokenized.parquet"), index=None)
| archive/data_setup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Debug energy conservation implementation
#
# Why does it crash and what could we do about it?
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
from cbrain.imports import *
from cbrain.utils import *
from matplotlib.animation import FuncAnimation
from IPython.display import SVG, HTML
from utils import *
import matplotlib as mpl
DATA_DIR = '/scratch/05488/tg847872/debug/'
REF_DIR = '/scratch/05488/tg847872/fluxbypass_aqua/'
DT = 1800
# ## Why do we crash?
df1 = xr.open_mfdataset(f'{DATA_DIR}*F001.*h1*', decode_times=False)
d3 = xr.open_mfdataset(f'{DATA_DIR}*debug3*h1*', decode_times=False)
d4 = xr.open_mfdataset(f'{DATA_DIR}*debug4*h1*', decode_times=False)
dref = xr.open_mfdataset(f'{REF_DIR}*h1*0000-01-0[1-2]*', decode_times=False)
d3 = df1
d1 = xr.open_mfdataset(f'{DATA_DIR}*_F001.*h1*', decode_times=False)
d4 = xr.open_mfdataset(f'{DATA_DIR}*debug4*h1*', decode_times=False)
d5 = xr.open_mfdataset(f'{DATA_DIR}*debug5*h1*', decode_times=False)
d6 = xr.open_mfdataset(f'{DATA_DIR}*debug6*h1*', decode_times=False)
fig, axes = plt.subplots(2, 3, figsize=(15, 10))
[d1[v].isel(time=slice(0, 48*3, 24)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='1')
for v, ax in zip(['NNTBP', 'NNQBP', 'NNQCBP', 'NNQIBP'], axes.flat)];
[d3[v].isel(time=slice(0, 48*3, 24)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='3')
for v, ax in zip(['NNTBP', 'NNQBP', 'NNQCBP', 'NNQIBP'], axes.flat)];
[d4[v].isel(time=slice(0, -2, 24)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='4')
for v, ax in zip(['NNTBP', 'NNQBP', 'NNQCBP', 'NNQIBP'], axes.flat)];
[d5[v].isel(time=slice(0, -2, 24)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='5')
for v, ax in zip(['NNTBP', 'NNQBP', 'NNQCBP', 'NNQIBP'], axes.flat)];
[d6[v].isel(time=slice(0, -2, 24)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='6')
for v, ax in zip(['NNTBP', 'NNQBP', 'NNQCBP', 'NNQIBP'], axes.flat)];
[dref[v].isel(time=slice(0, None, 24)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='ref')
for v, ax in zip(['TAP', 'QAP', 'QCAP', 'QIAP'], axes.flat)];
axes.flat[0].legend();
# QC and QI(!) increase very rapidly, basically from the start. Is this global or local?
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
(d5.QIAP.isel(time=-2).mean('lev') - d5.QIAP.isel(time=1).mean('lev')).plot(cmap='bwr', ax=axes[0]);
d5.QIAP.isel(time=1).mean('lev').plot(cmap='jet', ax=axes[1]);
fig.suptitle('QI difference last time step to first');
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
(df1.QIAP.isel(time=-2).mean('lon') - df1.QIAP.isel(time=1).mean('lon')).plot(cmap='bwr', yincrease=False, ax=axes[0]);
df1.QIAP.isel(time=1).mean('lon').plot(cmap='jet', ax=axes[1], yincrease=False);
fig.suptitle('QI difference last time step to first');
# So these are huge increases, particularly in regions where there shouldn't be much ice at all. This is right at the end of the run however when things have probably gotten off the rails already. The increase is happening from the start however. Maybe looking at the one day differences is more enlightening.
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
(df1.QIAP.isel(time=48).mean('lev') - df1.QIAP.isel(time=1).mean('lev')).plot(cmap='bwr', ax=axes[0]);
df1.QIAP.isel(time=1).mean('lev').plot(cmap='jet', ax=axes[1]);
fig.suptitle('QI difference after one day');
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
(df1.QIAP.isel(time=48).mean('lon') - df1.QIAP.isel(time=1).mean('lon')).plot(cmap='bwr', yincrease=False, ax=axes[0]);
df1.QIAP.isel(time=1).mean('lon').plot(cmap='jet', ax=axes[1], yincrease=False);
fig.suptitle('QI difference after one day');
fig, axes = plt.subplots(1, 2, figsize=(15, 4))
(df1.QIAP.isel(time=48).mean('lon') - dref.QIAP.isel(time=48).mean('lon')).plot(cmap='bwr', yincrease=False, ax=axes[0]);
(df1.QIAP.isel(time=48).mean('lev') - dref.QIAP.isel(time=48).mean('lev')).plot(cmap='bwr', ax=axes[1]);
fig.suptitle('QI difference to reference');
# So we are red mostly where there isn't much ice to begin with and then it seems at the tropopause above convection.
# ## Why are we getting this much ice and liquid water!?
#
# NNDQI and NNDQC are the only sources/sinks for ice and liquid!
fig, axes = plt.subplots(1, 4, figsize=(15, 4))
[d6[v].isel(time=slice(0, 48, 1)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='PH*')
for v, ax in zip(['PHQ', 'PHCLDLIQ', 'PHCLDICE'], axes.flat)]
[d6[v].isel(time=slice(0, 48, 1)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='NNDQ*')
for v, ax in zip(['NNDQ', 'NNDQC', 'NNDQI'], axes.flat)]
[d6[v].isel(time=slice(0, 48, 1)).mean(('lat', 'lon', 'lev')).plot(ax=ax, label='CNNDQ*')
for v, ax in zip(['CNNDQ', 'CNNDQC', 'CNNDQI'], axes.flat)]
[d6[v].isel(time=slice(0, 3, 1)).mean(('lat', 'lon')).plot(ax=axes[-1], label=v) for v in ['NNPRL', 'CNNPRL']]
axes[0].legend(); axes[-1].legend(); plt.tight_layout()
d4['PS'] = d4['NNPS']
dQCONV = vint(d4, 'PHQ', 1) + vint(d4, 'PHCLDLIQ', 1) + vint(d4, 'PHCLDICE', 1)
dQSRF = d4.NNLHF / L_V
d4.CNNPRL.isel(time=1).mean().values
d4.NNPRL.isel(time=1).mean().values
dQPREC = d4.CNNPRL
a = dQCONV.isel(time=1).values
b = dQSRF.isel(time=1).values - dQPREC.isel(time=1).values
diff = a-b
rel_diff = diff/(a+b)*2.
plt.scatter(a.flat, b.flat, c=rel_diff.flat, cmap='coolwarm', norm=mpl.colors.Normalize(vmin=-0.5,vmax=0.5), alpha=0.2);
plt.colorbar(shrink=0.7); plt.xlim(-2e-3, 2e-3); plt.ylim(-2e-3, 2e-3);
# Here we have a problem. The actual difference and the NN tendencies do not agree. In fact, the actual differences are much higher! Let's go hunting!
#
# My suspicion is that it's the qneg3 correction!
fig, axes = plt.subplots(1, 2, figsize = (15, 5))
(df1['NNDQI']*DT).isel(time=48, lon=0).plot(yincrease=False, ax=axes[0])
(df1['PHCLDICE']*DT - df1['NNDQI']*DT).isel(time=48, lon=0).plot(yincrease=False, ax=axes[1]);
fig, axes = plt.subplots(1, 2, figsize = (15, 5))
df1['NNQIBP'].isel(time=48, lon=0).plot(yincrease=False, ax=axes[0])
(df1['NNQIBP'] + df1['NNDQI']*DT).isel(time=48, lon=0).plot(yincrease=False, ax=axes[1], vmin=-1e-4, vmax=1e-4,
cmap='RdBu_r');
def get_neg(ds, var1, var2):
Qafter = ds[var1] + ds[var2]*DT
return -np.sum(np.minimum(Qafter.values, 0))
get_neg(df1.isel(time=48), 'NNQIBP', 'NNDQI'), get_neg(df1.isel(time=48), 'NNQCBP', 'NNDQC')
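The check above sums the (absolute) amount of tracer that one time step of the predicted tendency would drive below zero; a toy numpy version with made-up concentrations and tendencies:

```python
import numpy as np

DT = 1800  # model time step in seconds

def get_neg(q_before, dq_dt):
    # Apply one step of the tendency, then sum the mass that went negative
    q_after = q_before + dq_dt * DT
    return -np.sum(np.minimum(q_after, 0))

q = np.array([2e-6, 1e-6, 0.0])       # made-up ice concentrations (kg/kg)
dq = np.array([1e-9, -1e-9, -1e-9])   # made-up tendencies (kg/kg/s)
print(get_neg(q, dq))  # roughly 2.6e-06: the last two cells go negative
```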
# So here is our problem. We are predicting drying that would cause the concentrations to be below zero. Why are we predicting this? Like one question is:
#
# - Does this happen that strongly also on the validation set offline?
# ## Check the NN offline
KERAS_DIR = '/home1/05488/tg847872/saved_models/'
m = keras.models.load_model(f'{KERAS_DIR}/F001_fbp_engy_cons_sample1_max_rs_deep.h5')
mean = np.loadtxt(f'{KERAS_DIR}/F001_fbp_engy_cons_sample1_max_rs_deep/inp_means.txt', delimiter=',')
std = np.loadtxt(f'{KERAS_DIR}/F001_fbp_engy_cons_sample1_max_rs_deep/inp_max_rs.txt', delimiter=',')
dref['NNTBP'] = dref['TAP'] - dref['TPHYSTND']*DT
dref['NNQBP'] = dref['QAP'] - dref['PHQ']*DT
dref['NNQCBP'] = dref['QCAP'] - dref['PHCLDLIQ']*DT
dref['NNQIBP'] = dref['QIAP'] - dref['PHCLDICE']*DT
dref['NNVBP'] = dref['VAP'] - dref['VPHYSTND']*DT
dref['NNPS'] = dref['PS']
dref['NNSOLIN'] = dref['SOLIN']
dref['SHFLX'].load(); dref['LHFLX'].load();
dref['NNSHF'] = dref['SHFLX'].copy()
dref['NNLHF'] = dref['LHFLX'].copy()
dref['NNSHF'].values[1:] = dref['SHFLX'].values[:-1]
dref['NNLHF'].values[1:] = dref['LHFLX'].values[:-1]
# Get predictions for time step 95
inps = get_cb_inps(dref, 95, mean, std)
preds = m.predict(inps.reshape(154, -1).T, 4096).T.reshape(126, 64, 128); preds.shape
predDQI = preds[90:120] / L_V
dref['PHCLDICE'].load()
dref['PDQI'] = dref['PHCLDICE'].copy()
dref['PDQI'].values[95] = predDQI
fig, axes = plt.subplots(1, 2, figsize = (15, 5))
dref['PHCLDICE'].isel(time=95, lon=0).plot(yincrease=False, ax=axes[0])
dref['PDQI'].isel(time=95, lon=0).plot(yincrease=False, ax=axes[1]);
get_neg(dref.isel(time=95), 'NNQIBP', 'PDQI'), get_neg(dref.isel(time=95), 'NNQIBP', 'PHCLDICE')
# This suggests that this is also happening in the offline predictions, maybe not quite as bad. The raw output does look pretty random.
x = np.array([0.01, 2, 5, 2])
l = np.log(x); l
np.sum(l)
| notebooks/dev/debug-engy_cons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## This notebook will help you train a vanilla Point-Cloud AE with the basic architecture we used in our paper.
# (it assumes latent_3d_points is in the PYTHONPATH and the structural losses have been compiled)
# +
import os.path as osp
from latent_3d_points.src.ae_templates import mlp_architecture_ala_iclr_18, default_train_params
from latent_3d_points.src.autoencoder import Configuration as Conf
from latent_3d_points.src.point_net_ae import PointNetAutoEncoder
from latent_3d_points.src.in_out import snc_category_to_synth_id, create_dir, PointCloudDataSet, \
load_all_point_clouds_under_folder
from latent_3d_points.src.tf_utils import reset_tf_graph
from latent_3d_points.src.general_utils import plot_3d_point_cloud
# -
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# Define Basic Parameters
# +
top_out_dir = '../data/' # Use to save Neural-Net check-points etc.
top_in_dir = '../data/shape_net_core_uniform_samples_2048/' # Top-dir of where point-clouds are stored.
experiment_name = 'single_class_ae'
n_pc_points = 2048 # Number of points per model.
bneck_size = 128 # Bottleneck-AE size
ae_loss = 'chamfer' # Loss to optimize: 'emd' or 'chamfer'
class_name = input('Give me the class name (e.g. "chair"): ').lower()
# -
# Load Point-Clouds
syn_id = snc_category_to_synth_id()[class_name]
class_dir = osp.join(top_in_dir , syn_id)
all_pc_data = load_all_point_clouds_under_folder(class_dir, n_threads=8, file_ending='.ply', verbose=True)
# Load default training parameters (some of which are listed below). For more details please print the configuration object.
#
# 'batch_size': 50
#
# 'denoising': False (# by default AE is not denoising)
#
# 'learning_rate': 0.0005
#
# 'z_rotate': False (# randomly rotate models of each batch)
#
# 'loss_display_step': 1 (# display loss at end of these many epochs)
# 'saver_step': 10 (# over how many epochs to save neural-network)
train_params = default_train_params()
encoder, decoder, enc_args, dec_args = mlp_architecture_ala_iclr_18(n_pc_points, bneck_size)
train_dir = create_dir(osp.join(top_out_dir, experiment_name))
# Build the training configuration object.
conf = Conf(n_input = [n_pc_points, 3],
loss = ae_loss,
training_epochs = train_params['training_epochs'],
batch_size = train_params['batch_size'],
denoising = train_params['denoising'],
learning_rate = train_params['learning_rate'],
train_dir = train_dir,
loss_display_step = train_params['loss_display_step'],
saver_step = train_params['saver_step'],
z_rotate = train_params['z_rotate'],
encoder = encoder,
decoder = decoder,
encoder_args = enc_args,
decoder_args = dec_args
)
conf.experiment_name = experiment_name
conf.held_out_step = 5 # How often to evaluate/print out loss on
# held_out data (if they are provided in ae.train() ).
conf.save(osp.join(train_dir, 'configuration'))
load_pre_trained_ae = False
restore_epoch = 500
print ('hiii')
if load_pre_trained_ae:
conf = Conf.load(train_dir + '/configuration')
reset_tf_graph()
ae = PointNetAutoEncoder(conf.experiment_name, conf)
print ('hiii' ,ae)
ae.restore_model(conf.train_dir, epoch=restore_epoch)
print (ae)
# Build AE Model.
reset_tf_graph()
ae = PointNetAutoEncoder(conf.experiment_name, conf)
print ('hiii' ,ae)
# Train the AE (save output to train_stats.txt)
buf_size = 1 # line-buffered, so 'train_stats.txt' is flushed after each output line
fout = open(osp.join(conf.train_dir, 'train_stats.txt'), 'a', buf_size)
train_stats = ae.train(all_pc_data, conf, log_file=fout)
fout.close()
# Get a batch of reconstructions and their latent-codes.
# Use any plotting mechanism such as matplotlib to visualize the results.
# +
feed_pc, feed_model_names, _ = all_pc_data.next_batch(10)
reconstructions = ae.reconstruct(feed_pc)[0]
latent_codes = ae.transform(feed_pc)
for x in range(0, 2048):
print (reconstructions[:][0, x])
# +
i = 0
print('x =', reconstructions[i][:, 0])
print('y =', reconstructions[i][:, 1])
print('z =', reconstructions[i][:, 2])
i = 0
plot_3d_point_cloud(reconstructions[i][:, 0],
reconstructions[i][:, 1],
reconstructions[i][:, 2], in_u_sphere=True);
print (plot_3d_point_cloud(reconstructions[i][:, 0],
reconstructions[i][:, 1],
reconstructions[i][:, 2], in_u_sphere=True))
i = 2
plot_3d_point_cloud(reconstructions[i][:, 0],
reconstructions[i][:, 1],
reconstructions[i][:, 2], in_u_sphere=True);
print('x =', reconstructions[i][:, 0])
print('y =', reconstructions[i][:, 1])
print('z =', reconstructions[i][:, 2])
# -
| notebooks/train_single_class_ae.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:welly2]
# language: python
# name: conda-env-welly2-py
# ---
# ## Well
#
# Some preliminaries...
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import welly
welly.__version__
import os
# env = %env
# ## Load a well from LAS
#
# Use the `from_las()` method to load a well by passing a filename as a `str`.
#
# This is really just a wrapper for `lasio` but instantiates a `Header`, `Curve`s, etc.
from welly import Well
w = Well.from_las('P-129_out.LAS')
tracks = ['MD', 'GR', 'RHOB', ['DT', 'DTS'], 'MD']
w.plot(tracks=tracks)
# ## Add a striplog
from striplog import Legend, Striplog
legend = Legend.builtin('NSDOE')
strip = Striplog.from_image('P-129_280_1935.png', 280, 1935, legend=legend)
strip.plot()
w.data['strip'] = strip
tracks = ['MD', 'strip', 'GR', 'RHOB', ['DT', 'DTS'], 'MD']
w.plot(tracks=tracks)
# ## Header
#
# Maybe should be called 'meta' as it's not really a header...
w.header
w.header.name
w.uwi # Fails because not present in this file. See one way to add it in a minute.
# ## Location and CRS
w.location
from welly import CRS
w.location.crs = CRS.from_epsg(2038)
w.location.crs
# Right now there's no position log — we need to load a deviation survey.
w.location.position
# ## Add deviation data to a well
p = Well.from_las('P-130_out.LAS')
dev = np.loadtxt('P-130_deviation_survey.csv', delimiter=',', skiprows=1)
# The columns are MD, inclination, azimuth, and TVD.
dev[:5]
# `add_deviation` assumes those are the columns, and computes a position log.
p.location.add_deviation(dev[:, :3], td=2618.3)
# The columns in the position log are _x_ offset, _y_ offset, and TVD.
p.location.position[:5]
| tutorial/Well.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook uses the python api for [Google Earth Engine](https://earthengine.google.com/), a cloud-based remote sensing platform with access to petabytes of open-source remote sensing products. Data sets are already georeferenced and ready to go. Processing occurs on Google's servers which allows users to do planetary-scale analysis from a laptop.
#
# Google Earth Engine is free to use, but you must sign up for an account with your TAMU address. It typically takes a day or two to get approved.
#
# GEE was first launched with a JavaScript API, but a python API has been developed since. I found [Qiusheng Wu's](https://github.com/giswqs/earthengine-py-notebooks) GitHub very helpful in developing this notebook.
#
# In this notebook, I access [Landsat 5](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LT05_C01_T1_SR) imagery and the [National Elevation Dataset (NED) DEM](https://developers.google.com/earth-engine/datasets/catalog/USGS_NED), and clip them to the [Lower Brazos Watershed](https://developers.google.com/earth-engine/datasets/catalog/USGS_WBD_2017_HUC04) - all accessed remotely from the [GEE repo](https://developers.google.com/earth-engine/datasets). And because this is all just displayed on a simple folium map, I can also add my Lower Brazos centerline from a local geoJSON file.
#
# +
import folium
import geojson
import ee
import geehydro
# This initializes your access to Google Earth engine resources
ee.Initialize()
# -
# The `ee` and `geehydro` packages are specific for Google Earth Engine. `ee` contains all the functions used by GEE and `geehydro` allows for a convenient integration of GEE with `folium`.
# ## Lower Brazos Example
# +
# Brazos River Centerline
# Open GeoJSON file
with open('LwrBrazos_Centerline_4326.geojson') as f:
brazos = geojson.loads(f.read())
# Strip coords out of json file and flip lat/long, wrap in tuples
tupes = [tuple(feature['geometry']['coordinates'][::-1]) for feature in brazos.features]
# Make Polyline of centerline
river = folium.PolyLine(locations=tupes)
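GeoJSON stores coordinates as (longitude, latitude) while folium wants (latitude, longitude), hence the `[::-1]` flip above; a self-contained sketch with made-up points:

```python
# GeoJSON order is (lon, lat); folium.PolyLine expects (lat, lon)
coords = [(-96.1, 30.5), (-96.0, 30.4)]  # made-up points near the Brazos

# Reverse each pair and wrap it in a tuple, as done for the centerline
locations = [tuple(c[::-1]) for c in coords]
print(locations)  # [(30.5, -96.1), (30.4, -96.0)]
```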
# +
# Lower Brazos Watershed
# Load Lower Brazos Watershed from GEE repo
huc04 = ee.FeatureCollection('USGS/WBD/2017/HUC04')
lwr_b = huc04.filter(ee.Filter.eq('name', 'Lower Brazos'))
# Watershed visualization parameters
styleParams = {
'fillColor': '00000000',
'color': '000000',
'width': 2.0,
}
# Apply Style Parameter to FeatureCollection object
# - you can do this OOP style (here), or set the vis params when you add the layer to your map
subregions = lwr_b.style(**styleParams)
# +
# Landsat 5 Imagery
# Query Landsat 5 Imagery
# Steps:
# - Link to collection
# - Filter scenes by area
# - Filter by date
l5_collection = ee.ImageCollection("LANDSAT/LT05/C01/T1_SR") \
.filterBounds(lwr_b) \
.filterDate('1990-01-01', '1990-12-31')
# Flatten Image collection to a single image
# - take median DN for every pixel of every band
# - clip scenes to watershed boundaries
l5_img = l5_collection.median().clip(lwr_b)
# Landsat 5 visualization parameters
l5_vis = {
'bands':['B3', 'B2', 'B1'],
'min': 350,
'max': 2200
}
# +
# DEM for watershed
# Query National Elevation Dataset
dem = ee.Image('USGS/NED').select('elevation').clip(lwr_b)
# DEM visualization Parameters
dem_vis = {
'min': 0,
'max': 500
}
# -
# ## Create Landsat Map
# +
# Create Map
LS_Map = folium.Map(location = (30.5, -96), tiles = 'Stamen Terrain', zoom_start=7)
# Add Layers
LS_Map.addLayer(subregions, {}, 'Lower Brazos Watershed') # addLayer() is a GEE function
LS_Map.addLayer(l5_img, l5_vis, 'Landsat 5 Imagery')
LS_Map.add_child(river)
# Save & Show Map
LS_Map.save('LS_Map.html')
LS_Map
# -
# ## Create DEM Map
# +
# Create Map
DEM_Map = folium.Map(location = (30.5, -96), tiles = 'Stamen Toner', zoom_start=7)
# Add Layers
DEM_Map.addLayer(subregions, {}, 'Lower Brazos Watershed')
DEM_Map.addLayer(dem, dem_vis, 'NED Elevation')
DEM_Map.add_child(river)
# Save and Show Map
DEM_Map.save('DEM_Map.html')
DEM_Map
# -
| HW1/Brazos_Elevation/BrazosElevation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Essentials
import os, sys, glob
import pandas as pd
import numpy as np
import nibabel as nib
import scipy.io as sio
# Stats
import scipy as sp
from scipy import stats
import statsmodels.api as sm
import pingouin as pg
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams['svg.fonttype'] = 'none'
# -
import numpy.matlib
sys.path.append('/Users/lindenmp/Google-Drive-Penn/work/research_projects/normative_neurodev_cs_t1/1_code/')
from func import set_proj_env, get_synth_cov
train_test_str = 'train_test'
exclude_str = 't1Exclude' # 't1Exclude' 'fsFinalExclude'
parc_str = 'schaefer' # 'schaefer' 'lausanne'
parc_scale = 400 # 200 400 | 60 125 250
parcel_names, parcel_loc, drop_parcels, num_parcels, yeo_idx, yeo_labels = set_proj_env(exclude_str = exclude_str, parc_str = parc_str, parc_scale = parc_scale)
# output file prefix
outfile_prefix = exclude_str+'_'+parc_str+'_'+str(parc_scale)+'_'
outfile_prefix
# ### Setup directory variables
outputdir = os.path.join(os.environ['PIPELINEDIR'], '2_prepare_normative', 'out')
print(outputdir)
if not os.path.exists(outputdir): os.makedirs(outputdir)
figdir = os.path.join(os.environ['OUTPUTDIR'], 'figs')
print(figdir)
if not os.path.exists(figdir): os.makedirs(figdir)
# ## Load data
# +
# Load data
df = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'out', outfile_prefix+'df.csv'))
df.set_index(['bblid', 'scanid'], inplace = True)
df_node = pd.read_csv(os.path.join(os.environ['PIPELINEDIR'], '1_compute_node_features', 'out', outfile_prefix+'df_node.csv'))
df_node.set_index(['bblid', 'scanid'], inplace = True)
# adjust sex to 0 and 1
# now: male = 0, female = 1
df['sex_adj'] = df.sex - 1
print(df.shape)
print(df_node.shape)
# -
print('Train:', np.sum(df[train_test_str] == 0), 'Test:', np.sum(df[train_test_str] == 1))
# ## Normalize
metrics = ['ct', 'vol']
my_str = '|'.join(metrics); print(my_str)
norm_data = False
# +
if np.any(df_node.filter(regex = my_str, axis = 1) < 0):
print('WARNING: some regional values are <0, box cox will fail')
if np.any(df_node.filter(regex = my_str, axis = 1) == 0):
print('WARNING: some regional values are == 0, box cox will fail')
# +
rank_r = np.zeros(df_node.filter(regex = my_str).shape[1])
# normalise
if norm_data:
for i, col in enumerate(df_node.filter(regex = my_str).columns):
# normalize regional metric
x = sp.stats.boxcox(df_node.loc[:,col])[0]
# check if rank order is preserved
rank_r[i] = sp.stats.spearmanr(df_node.loc[:,col],x)[0]
# store normalized version
df_node.loc[:,col] = x
print(np.sum(rank_r < .99))
else:
print('Skipping...')
# -
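Box-Cox is a monotonic power transform on positive data, so the Spearman rank check above should come back at (or extremely near) 1; a minimal scipy sketch on synthetic skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # strictly positive, skewed

x_bc, lam = stats.boxcox(x)  # transformed data plus the fitted lambda

# A monotone transform preserves rank order exactly
rho = stats.spearmanr(x, x_bc)[0]
print(rho)
```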
# # Prepare files for normative modelling
# +
# Note, 'ageAtScan1_Years' is assumed to be covs[0] and 'sex_adj' is assumed to be covs[1]
# if more than 2 covs are to be used, append to the end and age/sex will be duplicated accordingly in the forward model
covs = ['ageAtScan1_Years', 'sex_adj']
print(covs)
num_covs = len(covs)
print(num_covs)
# -
extra_str_2 = ''
# ## Primary model (train/test split)
# +
# Write out training
df[df[train_test_str] == 0].to_csv(os.path.join(outputdir, outfile_prefix+'train.csv'))
df[df[train_test_str] == 0].to_csv(os.path.join(outputdir, outfile_prefix+'cov_train.txt'), columns = covs, sep = ' ', index = False, header = False)
# Write out test
df[df[train_test_str] == 1].to_csv(os.path.join(outputdir, outfile_prefix+'test.csv'))
df[df[train_test_str] == 1].to_csv(os.path.join(outputdir, outfile_prefix+'cov_test.txt'), columns = covs, sep = ' ', index = False, header = False)
# +
# Write out training
resp_train = df_node[df_node[train_test_str] == 0].drop(train_test_str, axis = 1)
mask = np.all(np.isnan(resp_train), axis = 1)
if np.any(mask): print("Warning: NaNs in response train")
resp_train.to_csv(os.path.join(outputdir, outfile_prefix+'resp_train.csv'))
resp_train.to_csv(os.path.join(outputdir, outfile_prefix+'resp_train.txt'), sep = ' ', index = False, header = False)
# Write out test
resp_test = df_node[df_node[train_test_str] == 1].drop(train_test_str, axis = 1)
mask = np.all(np.isnan(resp_test), axis = 1)
if np.any(mask): print("Warning: NaNs in response train")
resp_test.to_csv(os.path.join(outputdir, outfile_prefix+'resp_test.csv'))
resp_test.to_csv(os.path.join(outputdir, outfile_prefix+'resp_test.txt'), sep = ' ', index = False, header = False)
print(str(resp_train.shape[1]) + ' features written out for normative modeling')
# -
# ### Forward variants
# +
# Synthetic cov data
x = get_synth_cov(df, cov = 'ageAtScan1_Years', stp = 1)
if 'sex_adj' in covs:
    # Produce a sex dummy variable: one block of ones and one of zeros, pairing with the two repeats of the age grid
gender_synth = np.concatenate((np.ones(x.shape),np.zeros(x.shape)), axis = 0)
# concat
synth_cov = np.concatenate((np.matlib.repmat(x, 2, 1), np.matlib.repmat(gender_synth, 1, 1)), axis = 1)
print(synth_cov.shape)
# write out
np.savetxt(os.path.join(outputdir, outfile_prefix+'cov_test_forward.txt'), synth_cov, delimiter = ' ', fmt = ['%.1f', '%.d'])
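The forward-model covariates stack the same age grid twice, once per sex code, mirroring the concatenation above; a numpy sketch (the real age grid comes from the project-specific `get_synth_cov`, so the values here are hypothetical):

```python
import numpy as np

# Hypothetical age grid standing in for get_synth_cov's output, shape (n, 1)
ages = np.arange(8, 12, dtype=float).reshape(-1, 1)

# One block of ones and one block of zeros: one pass of ages per sex code
sex = np.concatenate((np.ones(ages.shape), np.zeros(ages.shape)), axis=0)

# np.tile plays the role of np.matlib.repmat(x, 2, 1)
synth_cov = np.concatenate((np.tile(ages, (2, 1)), sex), axis=1)
print(synth_cov.shape)  # (8, 2)
```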
| 1_code/2_prepare_normative.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import esm
import torch
from argparse import Namespace
from esm.constants import proteinseq_toks
import math
import torch.nn as nn
import torch.nn.functional as F
from esm.modules import TransformerLayer, PositionalEmbedding # noqa
from esm.model import ProteinBertModel
# model, alphabet = torch.hub.load("facebookresearch/esm", "esm1_t34_670M_UR50S")
import esm
# -
from ych_util import prepare_mlm_mask
import pandas as pd
import time
motor_toolkit = pd.read_csv("../../data/esm/motor_toolkit.csv")
motor_toolkit = motor_toolkit.sample(frac = 1)
motor_toolkit.head()
alphabet = esm.Alphabet.from_dict(proteinseq_toks)
# model_name = "esm1_t34_670M_UR50S"
model_name = "esm1_t12_85M_UR50S"
url = f"https://dl.fbaipublicfiles.com/fair-esm/models/{model_name}.pt"
if torch.cuda.is_available():
print("cuda")
model_data = torch.hub.load_state_dict_from_url(url, progress=False)
else:
model_data = torch.hub.load_state_dict_from_url(url, progress=False, map_location=torch.device('cpu'))
pra = lambda s: ''.join(s.split('decoder_')[1:] if 'decoder' in s else s)
prs = lambda s: ''.join(s.split('decoder.')[1:] if 'decoder' in s else s)
model_args = {pra(arg[0]): arg[1] for arg in vars(model_data["args"]).items()}
model_state = {prs(arg[0]): arg[1] for arg in model_data["model"].items()}
# +
model = esm.ProteinBertModel(
Namespace(**model_args), len(alphabet), padding_idx=alphabet.padding_idx
)
model.load_state_dict(model_state)
# +
# model.load_state_dict(torch.load("../../data/esm1_t12_85M_UR50S_balanced_201102.pt"))
# -
model.cuda()
model.train()
batch_converter = alphabet.get_batch_converter()
criterion = nn.CrossEntropyLoss()
lr = 0.0001 # learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
start_time = time.time()
print_every = 1000
for j in range(10):
for i in range(motor_toolkit.shape[0]):
if len(motor_toolkit.iloc[i,7])>1024:
continue
data = [(motor_toolkit.iloc[i,0], motor_toolkit.iloc[i,7])]
batch_labels, batch_strs, batch_tokens = batch_converter(data)
true_aa,target_ind,masked_batch_tokens = prepare_mlm_mask(alphabet,batch_tokens)
optimizer.zero_grad()
        results = model(masked_batch_tokens.to('cuda'), repr_layers=[12])  # t12 model has 12 layers; [34] was for the t34 model
pred = results["logits"].squeeze(0)[target_ind,:]
target = true_aa.squeeze(0)
loss = criterion(pred.cpu(),target)
loss.backward()
optimizer.step()
if i % print_every == 0:
print(batch_labels)
print(batch_strs)
print(batch_tokens.size())
print(masked_batch_tokens.size())
print(results["logits"].size())
print(pred.size())
print(target.size())
print(f"At Epoch: %.1f"% i)
print(f"Loss %.4f"% loss)
elapsed = time.time() - start_time
print(f"time elapsed %.4f"% elapsed)
torch.save(model.state_dict(), "../../data/esm1_t12_85M_UR50S_motor_toolkit_201102.pt")
# loss_vector.append(loss)
# break
torch.save(model.state_dict(), "../../data/esm1_t12_85M_UR50S_motor_toolkit_201102.pt")
print("done")
| code/transformer_201102/esm_transformer_mlm_motor_toolkit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Load functions and find building ranges
#
# This step is needed for both extracting the building info and for visualizing player trajectory from log data
#
# To find the range of the building, go to https://pessimistress.github.io/minecraft/ and drag .mca files from the region folder into the page to find the regions and chunks that contain the building
# * bottom left block -> (x_low, z_high)
# * top right block -> (x_high, z_low)
#
# If you find the building in an .mca file named `r.-5.0.mca`, then the region indices are the two numbers separated by periods, i.e., -5 and 0. Thus the region is (-5,0)
#
# If the building is encoded in multiple .mca files, run this code multiple times with different output sub folder names, then use Tool A-2 to merge the PNG, JSON, and CSV files.
# +
## pip3 install nbtlib
import map_generator as mg
world_name = 'Saturn_Feb4' ## 'Falcon v1.02' ## name of the Minecraft world folder in input/worlds/
region = (-5,0) ## name of the .mca file where you can find the building
building_ranges = { ## x_low, x_high, z_low, z_high, y_low, y_high
'Singleplayer': (-2192, -2129, 144, 191, 28, 30),
'Sparky v3.03': (-2176, -2097, 144, 207, 52, 54),
# 'Saturn_Feb4': (-2240, -2065, -96, -1, 22, 25), # r.-5.-1.mca
'Saturn_Feb4': (-2240, -2065, 1, 143, 22, 25), # r.-5.0.mca
# 'Falcon v1.02': (-2112, -2049, 128, 207, 60, 62) # r.-5.0.mca
# 'Falcon v1.02': (-2048, -2017, 128, 207, 60, 62) # r.-4.0.mca
}
ranges = building_ranges[world_name]
# -
# # Tool A - Generate map images, json, and csv
#
# The outputs include three parts:
# * images of the 2D floor plans on different levels of the region of interest at `outputs/*_map.png` where * = 0,1,2,9 (9 means a summary of floor 0 and 1, including victims, doors, signage, switches, etc.)
# * a json file specifying the block type of all blocks within the region of interest at `outputs/blocks_in_building.json`
# * a csv file that represent the region of interest that can be used as input to our gridworld framework at `outputs/darpa_maze.csv`
#
# Input world: name of a world folder in Minecraft saves, can be found in
# * /User/.minecraft/saves/
# * /Users/USER_NAME/Library/Application\ Support/minecraft/saves/
# * MalmoPlatform/Minecraft/run/saves/ (if using Malmo)
# +
import sys
sys.path.append('MCWorldlib.egg')
import mcworldlib as mc
from os.path import join
## load the world folder; this takes a while but only needs to be done once
world = mc.load(join('inputs', 'worlds', world_name))
# +
import map_generator as mg
all_blocks, important_blocks = mg.generate_maps(world, region, ranges)
mg.generate_json(all_blocks, ranges)
mg.generate_csv(important_blocks, ranges)
# -
# ## Tool A-2 - Merge png/json in multiple regions
#
# In the case of Falcon, the building spans both region (-4,0) and region (-5,0). Thus we need to concatenate the map images, as well as generate one single 'blocks_in_building.json' file.
#
# Here I keep all my past outputs as subfolders of a folder named "_old" inside "outputs".
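`merge_folders` handles the merge; for reference, the JSON half of that operation can be sketched as below. This is an assumption-laden illustration: it presumes `blocks_in_building.json` is a flat JSON object keyed by block coordinates, which may not match the actual format used by `map_generator`.

```python
import json

def merge_blocks_json(path1, path2, out_path):
    """Merge two blocks_in_building.json files; entries from the second file win on key clashes."""
    with open(path1) as f1, open(path2) as f2:
        merged = {**json.load(f1), **json.load(f2)}
    with open(out_path, "w") as out:
        json.dump(merged, out)
    return merged
```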
# +
import map_generator as mg
from os.path import join
folder1 = join('outputs','_old','210218 Saturn -5.-1')
folder2 = join('outputs','_old','210218 Saturn -5.0')
## generate '*_map.png' and 'blocks_in_building.json' in output folder
mg.merge_folders(folder1, folder2)
# +
import map_generator as mg
from os.path import join
folder1 = join('outputs','_old','200617 Falcon -5.0')
folder2 = join('outputs','_old','200617 Falcon -4.0')
## generate '*_map.png' and 'blocks_in_building.json' in output folder
mg.merge_folders(folder1, folder2)
# -
# # Tool B: Generate traces from data stream
# +
import trace_generator as tg
from os.path import join
world_name = 'Falcon v1.02'
## here we make a subfolder named 'Hackathon/' in both 'input/trajectories/' and 'output_trajectory/'
data_name = join('Hackathon', 'ASIST_data_study_id_000001_condition_id_000005_trial_id_000015_messages.json')
player_name = 'ASU_MC' # for trial id 8-15; # 'K_Fuse' for trial id 1-7
count_gap = 5
ranges = { ## x_low, x_high, z_low, z_high, y_low, y_high
'Singleplayer': (-2192, -2129, 144, 191, 28, 30),
'Sparky v3.03': (-2176, -2097, 144, 207, 52, 54),
'Falcon v1.02': (-2112, -2017, 128, 207, 60, 62) # the whole Falcon building for visualizing trajectories
}[world_name]
text_margin = {
'Singleplayer': (830,20),
'Sparky v3.03': (1120,20),
'Falcon v1.02': (1370,540)
}[world_name]
steps = tg.read_data(data_name, player_name, count_gap, ranges)
tg.generate_mp4(steps, text_margin, ranges)
| run_map_generator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.7
# language: julia
# name: julia-0.4
# ---
using KernelDensityEstimate
using Gadfly
using Distributions, StatsBase
using IteratedProbabilisticClassification
# don't mind the syntax warnings; these are being removed from the package dependencies over time
# +
# the modified EM classification algorithm has these major tuning parameters
params = TuningParameters(
Mfair=300, # cluster distribution approximation accuracy; more is better: 50 is the bare minimum, and computation is slow above ~1000
EMiters=40 # expectation-maximization iterations to refine cluster belief estimates and classification assignments: 5 is the minimum, with diminishing returns beyond ~70
)
# Example: simulated data with ground truth for reference
N1, N2 = 5000, 1000
# N1, N2 = 4000, 3000
data, groundtruth = simdata02_true(N1=N1,N2=N2)
# actual distribution of data
Gadfly.set_default_plot_size(15cm, 10cm)
# what does the data look like
plot(x=data.samples[1,:],y=data.samples[2,:],Geom.histogram2d)
# -
# Ground truth distribution of underlying data clusters
Gadfly.set_default_plot_size(15cm, 10cm)
# a generic 2D plotting tool for IteratedProbabilisticClassification
plotUtil2D([2;1],groundtruth=groundtruth)
# +
# belief from expert prediction, also used for initialization
c1_expt_pts = rand(MvNormal([5.0;5.0],[[3.0;-1.0]';[-1.0;3.0]']),100)
c1_expert = kde!(([5.0,5.0]')', [3.0,3.0]) # pretty much a normal distribution (overwritten by the sampled version just below)
c1_expert = kde!(c1_expt_pts)
c2_expert = kde!(([-5.0,-6.0]')', [4.0,4.0]) # Pretty much a normal distribution
# cs is the main structure that the classification algorithm operates on, and modifies during execution
cs = ClassificationSystem(
2, # number of categories or classes available for assignment
Dict(1=>"class1", 2=>"class2"), # names of classes
Dict(1=>"blue", 2=>"red"), # default plotting colors
Dict(1=>c1_expert, 2=>c2_expert), # user forcing behaviour (expert guidance); required
Dict(1=>deepcopy(c1_expert), 2=>deepcopy(c2_expert)), # initialize temporal prediction (forward-backward smoothing)
Dict(1=>deepcopy(c1_expert), 2=>deepcopy(c2_expert)), # initialize current belief (0 iterations)
rand(Categorical([0.5;0.5]),length(data.samples)) # initialize samples with random assignments, 50/50 in this 2-class case
);
println()
# +
# expert may modify expertBelief using output from previous run of classification algorithm
# pts = getPoints(cs.currentBelief[1])
# cs.expertBelief[1] = kde!(dispersePoints(pts, MvNormal([-1.0],[1.0]) ))
# pts = getPoints(cs.currentBelief[2])
# cs.expertBelief[2] = kde!(dispersePoints(pts, MvNormal([2.0],[1.0]) ))
# println()
# -
Gadfly.set_default_plot_size(20cm, 7cm)
# plotUtil1D(cs=cs) # if we didn't have gt
plotUtil2D(1,cs=cs)
# +
# simulation data allows us to debug with absolute knowledge
dbg = defaultDebugResults()
# do the classification
stats = EMClassificationRun!(params, cs, data, debug=dbg, groundtruth=groundtruth);
println()
# -
Gadfly.set_default_plot_size(20cm, 10cm)
plotUtil1D(cs=cs, groundtruth=groundtruth,
drawcurrent=true,
expertcolor=Dict(1=>"gray",2=>"gray")
)
sum(cs.assignment .== 1), sum(cs.assignment .== 2)
Gadfly.set_default_plot_size(20cm, 10cm)
plot(
layer(x=data.samples[1,cs.assignment .== 2], Geom.histogram,Theme(default_color=colorant"red")),
layer(x=data.samples[1,cs.assignment .== 1], Geom.histogram)
)
# a proxy for convergence of the classification algorithm, always available from the stats structure
Gadfly.set_default_plot_size(20cm, 15cm)
plotEMStatus(params,stats)
# only available when ground truth data is available
Gadfly.set_default_plot_size(20cm, 15cm)
plotClassificationStats(params, dbg)
# +
# pts = getPoints(cs.expertBelief[1])
# pts2 = dispersePoints(pts, MvNormal([0.0],[3.0]) )
# p2 = kde!(pts2)
# plotKDE([cs.expertBelief[1];p2])
# -
| examples/FCBClassification2DExample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bangalore House Price Prediction Using Linear Regression
import pandas as pd
path = r"https://drive.google.com/uc?export=download&id=1xxDtrZKfuWQfl-6KA9XEd_eatitNPnkB"
df = pd.read_csv(path)
df.head()
X=df.drop("price",axis=1)
y=df["price"]
print("shape of X:",X.shape)
print("shape of y:",y.shape)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=51)
print("shape of X_train:",X_train.shape)
print("shape of y_train:",y_train.shape)
print("shape of X_test:",X_test.shape)
print("shape of y_test:",y_test.shape)
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
sc.fit(X_train)
X_train=sc.transform(X_train)
X_test=sc.transform(X_test)
from sklearn.linear_model import LinearRegression
lr=LinearRegression()
lr.fit(X_train,y_train)
lr.coef_
lr.intercept_
X_test[0, :]  # (a single house's features)
lr.predict(X_test[0:1, :])  # predict for one house; needs a 2D array
lr.predict(X_test)
y_test
lr.score(X_test, y_test)  # R^2 on the test set; score() requires X and y
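`score` reports the coefficient of determination R² on the given split. For reference, the same quantity can be computed by hand; a small self-contained sketch using toy numbers rather than the house-price data:

```python
def r2_manual(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

print(r2_manual([1, 2, 3], [1, 2, 3]))  # perfect fit -> 1.0
print(r2_manual([1, 2, 3], [2, 2, 2]))  # predicting the mean -> 0.0
```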
| Linear_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # 1. Import Library
import pandas as pd
import numpy as np
import missingno as msno
import matplotlib.pyplot as plt
from pyecharts import options as opts
from pyecharts.charts import Scatter3D
from pyecharts.faker import Faker
# # 2. Import Data & Preprocessing
# ## (1) Import Data
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/test.csv")
submission = pd.read_csv("../data/sample_submission.csv")
# ## (2) Null Check
# missingno is the best library for checking missing values. This competition's data has no missing values, so missingno's strengths could not be shown here.
data = train.drop(['id', 'target'], axis=1)
data.isna().sum().sum()
# +
# msno.bar(data);
# +
# msno.dendrogram(data);
# +
#msno.heatmap(data);
# -
msno.matrix(data);
# ## (3) Analysis
# Since the sensor maximum is 127.16 and the minimum is -127.8, we guessed this is a 0-255 grayscale image and carried out steps 3(1)-3(3) below.<br>
# The data is described as "data from sensors attached to the hand", so, taking a different view, we also guessed these could be the sensors' z-axis values and carried out step 3(4) below.
data.max().max()
data.min().min()
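If the grayscale hypothesis holds, a reading in roughly [-127.5, 127.5] maps to an 8-bit pixel value by an affine rescaling. A sketch (the range bounds are assumptions inferred from the observed min/max above):

```python
def sensor_to_pixel(value, lo=-127.5, hi=127.5):
    """Map a sensor reading in [lo, hi] to an 8-bit grayscale value in 0..255."""
    value = min(max(value, lo), hi)  # clip to the assumed range
    return round((value - lo) / (hi - lo) * 255)

print(sensor_to_pixel(-127.5))  # -> 0
print(sensor_to_pixel(127.5))   # -> 255
print(sensor_to_pixel(127.16))  # close to pure white
```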
# # 3. Visualization
# ## (1) 8 x 4 Samples
# For each hand gesture we randomly sample 10 examples and visualize them as 8x4 images.
def imshow_8x4(data, target, number):
df = data[data['target']==target] # keep only the rows for the target gesture
ids = np.random.choice(df.id, number) # randomly pick `number` samples
fig, ax = plt.subplots(int(number/5), 5, figsize=(20, number*1.5))
ax = ax.ravel()
plt.suptitle(f'Target: {target}', size=30, weight='bold')
for i in range(number):
ax[i].set_title(f'ID: {ids[i]}', size=16)
ax[i].imshow(np.array(df.loc[int(ids[i])-1])[1:-1].reshape(8, 4), cmap='gray', vmin=-255/2, vmax=255/2)
return None
imshow_8x4(train, 0, 10)
imshow_8x4(train, 1, 10)
imshow_8x4(train, 2, 10)
imshow_8x4(train, 3, 10)
# ## (2) 4 x 8 Samples
# For each hand gesture we randomly sample 12 examples and visualize them as 4x8 images.
def imshow_4_8(data, target, number):
df = data[data['target']==target] # keep only the rows for the target gesture
ids = np.random.choice(df.id, number) # randomly pick `number` samples
fig, ax = plt.subplots(int(number/3), 3, figsize=(20, number*1.5))
ax = ax.ravel()
plt.suptitle(f'Target: {target}', size=30, weight='bold')
for i in range(number):
ax[i].set_title(f'ID: {ids[i]}', size=16)
ax[i].imshow(np.array(df.loc[int(ids[i])-1])[1:-1].reshape(4, 8), cmap='gray', vmin=-255/2, vmax=255/2)
return None
imshow_4_8(train, 0, 12)
imshow_4_8(train, 1, 12)
imshow_4_8(train, 2, 12)
imshow_4_8(train, 3, 12)
# ## (3) Target Means
# Average the sensor values for each target, then visualize
fig, ax = plt.subplots(1, 4, figsize = (20, 10))
for i in range(4):
ax[i].set_title(f'Target: {i}')
ax[i].imshow(np.array(train.drop('id', axis=1).groupby('target').mean().iloc[i]).reshape(8, 4),
cmap='gray')
# Average the sensor values for each target, fix the cmap range, then visualize
fig, ax = plt.subplots(1, 4, figsize = (12, 6))
for i in range(4):
ax[i].set_title(f'Target: {i}')
ax[i].imshow(np.array(train.drop('id', axis=1).groupby('target').mean().iloc[i]).reshape(8, 4),
cmap='gray',
vmin=np.array(train.drop('id', axis=1).groupby('target').mean()).min(),
vmax=np.array(train.drop('id', axis=1).groupby('target').mean()).max())
# ## (4) 3D Views
# Since the sensors are attached to the hand, we also considered the sensor readings to be z-coordinate values. <br>While thinking about how to visualize this, we arbitrarily laid the x and y axes out in order and plotted the data. <br> If this logic is right, the data was probably collected by attaching sensors to the hand inside a box spanning heights -127 to 127.
def show_3d_8_4(data, target, number):
df = data[data['target']==target] # keep only the rows for the target gesture
ids = np.random.choice(df.id, number) # randomly pick `number` samples
for n in range(number):
coordinates = []
for i in range(4):
for j in range(8):
coordinates.append([i, j, df.loc[ids[n]-1][1:-1][i*8+j]])
scatter3D = (Scatter3D()
.add("", coordinates)
.set_global_opts(
title_opts=opts.TitleOpts(f"Target: {target}, ID: {ids[n]}"),
visualmap_opts=opts.VisualMapOpts(range_color=Faker.visual_color))
)
return scatter3D.render_notebook()
show_3d_8_4(train, 0, 1)
show_3d_8_4(train, 1, 1)
show_3d_8_4(train, 2, 1)
show_3d_8_4(train, 3, 1)
def show_3d_4_8(data, target, number):
df = data[data['target']==target] # keep only the rows for the target gesture
ids = np.random.choice(df.id, number) # randomly pick `number` samples
for n in range(number):
coordinates = []
for i in range(8):
for j in range(4):
coordinates.append([i, j, df.loc[ids[n]-1][1:-1][i*4+j]])
scatter3D = (Scatter3D()
.add("", coordinates)
.set_global_opts(
title_opts=opts.TitleOpts(f"Target: {target}, ID: {ids[n]}"),
visualmap_opts=opts.VisualMapOpts(range_color=Faker.visual_color))
)
return scatter3D.render_notebook()
show_3d_4_8(train, 0, 1)
show_3d_4_8(train, 1, 1)
show_3d_4_8(train, 2, 1)
show_3d_4_8(train, 3, 1)
| hand-gesture-classification/code/EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''.venv'': poetry)'
# name: python3
# ---
# + [markdown] id="GWa6Z8E2RyJ0"
# # Neural Networks and PyTorch (Part 2)
# -
# > 🚀 This practical session requires: `numpy==1.21.2, pandas==1.3.3, matplotlib==3.4.3, scikit-learn==0.24.2, torch==1.9.1`
#
# > 🚀 You can install them with: `!pip install numpy==1.21.2 pandas==1.3.3 matplotlib==3.4.3 scikit-learn==0.24.2 torch==1.9.1`
#
# # Contents <a name="content"></a>
#
# * [Multilayer networks](#Mnogoslojnye_seti)
# * [Exercise: training a multilayer neural network](#Zadanie_-_obuchenie_mnogoslojnoj_nejronnoj_seti)
# * [How do we evaluate a neural network?](#Kak_otsenit__rabotu_nejroseti?)
# * [I choose nonlinearity!](#Ja_vybiraju_nelinejnost_!)
# * [Exercise: a nonlinear network](#Zadanie_-_nelinejnaja_set_)
# * [A neural network for classification](#Nejroset__dlja_klassifikatsii)
# * [How do we save a model?](#Kak_sohranit__model_?)
# * [Results](#Rezul_taty)
# * [Conclusions - exercise](#Vyvody_-_zadanie)
#
# + id="W-8_Ohrjhm2Z" _cell_id="jVwIvV3JSb9QkDvh"
# Import the required modules
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import random
import torch
# Visualization settings
# If you use a dark theme, white text works better
TEXT_COLOR = 'black'
matplotlib.rcParams['figure.figsize'] = (15, 10)
matplotlib.rcParams['text.color'] = TEXT_COLOR
matplotlib.rcParams['font.size'] = 14
matplotlib.rcParams['axes.labelcolor'] = TEXT_COLOR
matplotlib.rcParams['xtick.color'] = TEXT_COLOR
matplotlib.rcParams['ytick.color'] = TEXT_COLOR
# Fix the random seed
RANDOM_STATE = 0
random.seed(RANDOM_STATE)
# torch needs its own seed fixed as well
torch.manual_seed(RANDOM_STATE)
np.random.seed(RANDOM_STATE)
# + [markdown] id="5eDD4mWwR1Jk"
# So far we have been doing just fine with a neural network consisting of a single neuron! Time to try building a multilayer network!
# + _cell_id="w37zRpXzt6l7wGXA"
from torch import nn
# -
# We will use the same data as in the previous notebook with the single-layer network.
# + _cell_id="9gAGufXuBVrfDBPU"
n_points = 100
rng = np.random.default_rng(RANDOM_STATE)
X_data = 4*np.sort(rng.random((n_points, 1)), axis=0)+1
noize = 1*(rng.random((n_points, 1))-0.5)
real_W = [2, 0.7]
y_data_true = real_W[0] + real_W[1]*X_data
y_data_noized = y_data_true + noize
y_data = y_data_noized[:, 0]
# + [markdown] id="tOLVRQRGrcMS"
# # Multilayer networks
# + [markdown] id="kROQymoxre8D"
# The simplest way to build a multilayer network is to create two layer modules and apply them one after another!
# + colab={"base_uri": "https://localhost:8080/", "height": 69} executionInfo={"elapsed": 4798, "status": "ok", "timestamp": 1602520182681, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="42jAKlSksoko" outputId="391c28df-1bc7-4e1c-e270-cbd6c4d4396a" _cell_id="FKVKkEe2XFSKkEr6"
torch.manual_seed(RANDOM_STATE)
# Two neurons in the first layer
layer1 = nn.Linear(1, 2)
# The previous layer has two neurons, so this one has two inputs
# To get a single output from the whole network, the last layer must have 1 output
layer2 = nn.Linear(2, 1)
# Sample data
X_sample = torch.tensor([[1], [2], [3]]).float()
# Apply them one after another
l1_data = layer1(X_sample)
y_pred_tnsr = layer2(l1_data)
# Look at the predictions
y_pred_tnsr
# + [markdown] id="mBtf1dWzvmXy"
# In principle we could leave it at that, but modern networks usually contain a great many layers, and it would be awkward to run them in loops while also storing the intermediate values.
#
# > 🔥 We will now see how to streamline working with many layers, but it is important to remember that, when you need to investigate, you can take the network apart layer by layer and get the values from any of them.
#
# The first way to combine modules into one is to use [`nn.Sequential`](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html). The principle is that it chains operations into a sequence:
# + colab={"base_uri": "https://localhost:8080/", "height": 138} executionInfo={"elapsed": 4770, "status": "ok", "timestamp": 1602520182681, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="xhFv0yMFJ2hD" outputId="37897310-e66e-4f1a-ec7e-96a65d49ba86" _cell_id="Iiu0c0fUZnYCQxmx"
seq_module = nn.Sequential(layer1, layer2)
print(seq_module)
y_pred_tnsr = seq_module(X_sample)
y_pred_tnsr
# + [markdown] id="4hKjYpn0Jpe-"
#
# Another way is to write a model class that inherits from `nn.Module`. Let's see how that is done:
# + id="vlmrWNHIwaoe" _cell_id="9dgjwC401HqVjeGE"
class MyLinearModel(nn.Module):
def __init__(self):
# This line calls the constructor of the class we inherit from
# It is needed to set the class up correctly
super().__init__()
torch.manual_seed(RANDOM_STATE)
self.layer1 = nn.Linear(1, 2)
self.layer2 = nn.Linear(2, 1)
# The method you must implement so that calling the model works! It must always be named forward!
def forward(self, X):
l1_data = self.layer1(X)
y_pred = self.layer2(l1_data)
return y_pred
# + colab={"base_uri": "https://localhost:8080/", "height": 138} executionInfo={"elapsed": 4734, "status": "ok", "timestamp": 1602520182687, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="bWxHg6XUwxOC" outputId="3333c866-ce27-47e7-9ea2-c04440b9e95a" _cell_id="YZ2McYrAQ6luOPyl"
model = MyLinearModel()
# Printing the model shows all the layers inside it
print(model)
# This is exactly where the class's forward() method gets called
y_pred_tnsr = model(X_sample)
y_pred_tnsr
# + [markdown] id="6OvR_7bYxNhL"
# So you create a class that holds all the required layers (you can even use `nn.Sequential` and other helper classes inside it) and implement a `forward()` method that describes what to do with those layers. An object of this class can then simply be called to obtain the result of `forward()`! Great encapsulation!
# + [markdown] id="-Lzc6KUtx_pE"
# ## Exercise: training a multilayer neural network
#
# Time to train the model and see whether it works better or worse than the single-layer network with one neuron. Use the functions you wrote in the previous notebook for this.
# + _cell_id="0LB9qFGH8sLBIs5f"
def predict(model, X):
return model(X)
# + _cell_id="nnAqdg1wK1wiSicv"
def fit_model(model, optim, loss_op, X: np.ndarray, y: np.ndarray, n_iter: int):
"""
model:
The model to train
optim:
The optimizer module
loss_op:
The loss-function module
X:
Data matrix
y:
Vector of ground-truth values
n_iter:
Number of training iterations (epochs)
"""
# Paste your code from the previous notebook here =)
loss_values = []
return loss_values
# + id="juU6SjWbyN2V" _cell_id="ZzDsY0hvnRigKVme"
# TODO - train the multilayer network and plot the training history
# and the trained model's predictions (on a chart together with the data)
model = MyLinearModel()
# + _cell_id="Sf5rGWxiFmk1Wudr"
def plot_model(X, y_pred, y_true):
plt.scatter(X, y_true, label='Data')
plt.plot(X, y_pred, 'k--', label='Model prediction')
plt.ylabel('$Y$')
plt.xlabel('$X$')
plt.grid()
plt.legend()
plt.show()
# + _cell_id="9zYAc4npdGOs8z0X"
def preprocess_vector(x: list):
# Paste your code from the previous notebook here =)
return None
# + _cell_id="r4iK9dGP60tTHPbD"
# TEST
_test_tnsr = preprocess_vector([3.5])
_test_pred = model(_test_tnsr)
np.testing.assert_array_almost_equal(_test_pred.detach(), [[4.5]], decimal=1)
assert loss_history[-1] < 0.22
# + [markdown] id="WQavJdXGy2UH"
# > ⚠️ Note that the model now consists of two layers and three neurons in total, yet the character of the prediction (a straight line) has not changed. A neural network built only from linearly activated layers gives you nothing beyond a linear model!
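This can be verified with plain arithmetic, no torch required: stacking two layers with linear activations (1 input, hidden width 2, 1 output) collapses into a single linear map. The weights below are arbitrary illustrative values.

```python
# two stacked "layers" with linear activation: 1 input -> hidden width 2 -> 1 output
w1, b1 = 0.5, 1.0            # hidden neuron 1 (illustrative weights)
w2, b2 = -2.0, 0.3           # hidden neuron 2
v1, v2, c = 1.5, 0.4, -0.2   # output neuron

def two_layer(x):
    h1, h2 = w1 * x + b1, w2 * x + b2
    return v1 * h1 + v2 * h2 + c

# the equivalent single linear layer: y = W*x + B
W = v1 * w1 + v2 * w2
B = v1 * b1 + v2 * b2 + c

for x in (-3.0, 0.0, 7.5):
    assert abs(two_layer(x) - (W * x + B)) < 1e-12
print(W, B)  # roughly -0.05 and 1.42
```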
# + [markdown] id="a6REEoV3WuXv"
# # How do we evaluate a neural network?
#
# A neural network is just another kind of machine-learning model. Since we were solving a regression task, all the metrics we used for regression models apply here!
#
# Likewise, we will shortly look at a classification task, and the same evaluation principles hold there.
# + id="7QaCu3t9jkfp" _cell_id="aWdjsBLmz6sGfUzx"
# TODO - write a function that evaluates the model with the R2 metric
# (don't forget to import the required function from sklearn)
def evaluate_r2(model, X, y):
return r2_value
# + colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"elapsed": 4640, "status": "ok", "timestamp": 1602520182701, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="q7EChNnYg-5p" outputId="4ac00ffc-5280-499d-8c38-a96b340b04a5" _cell_id="4gG79thPyuri9cw2"
# TEST
r2_value = evaluate_r2(model, X_data, y_data)
print(f'R2: {r2_value}')
np.testing.assert_almost_equal(r2_value, 0.83848, decimal=5)
# -
# > ⚠️ Remember that we are just learning the framework here - a score obtained on the training set carries little meaning (it is only an example); in real development you should, as always, evaluate the model on a held-out set (e.g., a test set)
# + [markdown] id="2KGsO_ki0DVV"
# # I choose nonlinearity!
# + [markdown] id="f8vgsDnd0Gg_"
# <p align="center"><img src="https://raw.githubusercontent.com/kail4ek/ml_edu/master/assets/nonlinear.jpg" width=600/></p>
#
# For data in which we only need to capture linear dependencies, a network without activations (or just a single neuron) is enough. Now, first, let's start working with data that has a clear nonlinearity, and second, let's see how to add an activation function to a layer to give the network the property of nonlinearity.
#
# This is what our data will look like:
# + colab={"base_uri": "https://localhost:8080/", "height": 614} executionInfo={"elapsed": 5433, "status": "ok", "timestamp": 1602520183527, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="gmu12mNw1YQz" outputId="0641c992-0449-4081-ec11-93dad850fa55" _cell_id="P8JNZNQuOvixg8Zc"
rng = np.random.default_rng(RANDOM_STATE)
X_data = np.linspace(2, 10, 100)[:, None]
y_data = 1/X_data[:,0]*5 + rng.standard_normal(size=X_data.shape[0])/7 + 2
# y_data = (-1)*X_data[:,0]**2+(10)*X_data[:,0] + np.random.normal(size=X_data.shape[0]) + 5
# Look at the data
plt.scatter(X_data[:,0], y_data)
plt.grid(True)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.show()
# + [markdown] id="6H56vfhJ1mHl"
# Great - a linear model is unlikely to cope here; we need to learn how to give layers a nonlinear activation!
#
# Let's start with a single layer and give it an activation function:
# + colab={"base_uri": "https://localhost:8080/", "height": 173} executionInfo={"elapsed": 5406, "status": "ok", "timestamp": 1602520183528, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="KgUDAdmY14PD" outputId="afc16f30-3fba-4219-be5d-fc3b7bedf28c" _cell_id="wMJ9cGcni3NlsA9m"
# Create a layer and set its weight manually
layer = nn.Linear(1, 1, bias=False)
layer.weight.data.fill_(10)
# Create a sigmoid module
activation_func = nn.Sigmoid()
# Sample data
X_sample = torch.tensor([[-10], [0], [10]]).float()
print(f'Input: {X_sample}')
# Compute the layer's output
layer_result = layer(X_sample)
print(f'Layer result: {layer_result}')
# Apply the sigmoid module
act_result = activation_func(layer_result)
print(f'Activation result: {act_result}')
# + [markdown] id="9nybLqrz22Qe"
# That is how easily nonlinearity can be added to a network: create a module and apply it.
#
# Another, more preferable way is not to create a module at all, but simply to call the sigmoid function:
# + colab={"base_uri": "https://localhost:8080/", "height": 173} executionInfo={"elapsed": 5380, "status": "ok", "timestamp": 1602520183529, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="K56cKQMG3M3W" outputId="a9c58ba3-427e-44e2-bed4-7a4f755a7983" _cell_id="4Wva1kQOvyMX6XCi"
layer_result = layer(X_sample)
act_result = torch.sigmoid(layer_result)
print(f'Input: {X_sample}')
print(f'Layer result: {layer_result}')
print(f'Activation result: {act_result}')
# + [markdown] id="HmLXxpOp3jIN"
# We see the same result both in the values and in the `grad_fn` attribute. So, when writing a model class, you can call the sigmoid function (or another activation function) directly inside the `forward()` method.
# + [markdown] id="Ixf3aHTF6hM5"
# ## Exercise: a nonlinear network
#
# Implement a two-layer network with architecture `[2, 1]`:
# - 2 neurons in the hidden layer;
# - 1 neuron in the output layer.
#
# The hidden layer should have a sigmoid activation, and the output layer should have a linear activation (i.e., none). Once it is written, train the model and look at its predictions:
# + id="3VNwBxxJ7KXJ" _cell_id="KDdUSHFYkxue8nD2"
# TODO - implement the neural network model with nonlinearity,
# then train and evaluate the model
class NonlinearNeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(RANDOM_STATE)
# Create the layers here
def forward(self, X):
return y_pred
# + _cell_id="D1YbBWQAIv3IsdfH"
# TEST
_test_model = NonlinearNeuralNetwork()
_test_tnsr = preprocess_vector([10])
_test_pred = _test_model(_test_tnsr)
np.testing.assert_array_almost_equal(_test_pred.shape, (1, 1))
np.testing.assert_array_almost_equal(_test_pred.detach(), [[0.0949331]], decimal=4)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 5336, "status": "ok", "timestamp": 1602520183532, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="ZYMFY7AuYHN6" outputId="3d59f141-bf04-4f9c-b3c3-45b841da94cd" _cell_id="TvVdZHBljFJWgSjw"
model = NonlinearNeuralNetwork()
optimizer = torch.optim.SGD(
params=model.parameters(),
lr=0.1
)
loss_op = nn.MSELoss()
loss_history = fit_model(
model=model,
optim=optimizer,
loss_op=loss_op,
X=X_data,
y=y_data,
n_iter=1000
)
plt.plot(loss_history)
plt.grid()
plt.show()
X_tnsr = torch.tensor(X_data).float()
y_pred_tnsr = model(X_tnsr)
y_pred = y_pred_tnsr.detach().numpy()
plot_model(X_data, y_pred, y_data)
# + [markdown] id="7Icqd4EP8rHG"
# Excellent! The training results show that a model with nonlinearity can behave nonlinearly and describe nonlinear dependencies. On your own, evaluate it with numerical metrics and try to make the model even more accurate using cross-validation!
# + [markdown] id="foOFLOPIMv8j"
# # A neural network for classification
# + [markdown] id="dDSvbB4hgd_R"
# It should be clear by now that neural networks are not limited to regression, so we will also cover how a model works for a classification task! Let's create data for classification:
# + colab={"base_uri": "https://localhost:8080/", "height": 632} executionInfo={"elapsed": 6599, "status": "ok", "timestamp": 1602520184817, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="2K1Ns790My-H" outputId="de88b2c5-5140-4ed7-9035-59f8960b8a2c" _cell_id="5mCN2V5ct1oWYwbE"
from sklearn.datasets import make_moons
X_data, y_data = make_moons(
n_samples=1000,
noise=.1,
random_state=RANDOM_STATE
)
pnts_scatter = plt.scatter(X_data[:, 0], X_data[:, 1], marker='o', c=y_data, s=50, edgecolor='k')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.grid(True)
plt.legend(handles=pnts_scatter.legend_elements()[0], labels=['0', '1'])
plt.show()
# + [markdown] id="6vo5q9Xsgi9G"
# Great! We have the data - now we can move on to writing and training the model!
# + id="ymzjd-kkM7l4" _cell_id="DHhxkXECtM7bOF1Y"
class ClassificationNN(nn.Module):
def __init__(self):
super().__init__()
torch.manual_seed(RANDOM_STATE)
# Two features - two inputs
self.layer1 = nn.Linear(2, 2)
# The output is the confidence of assigning class 1 (since this is binary classification)
self.layer2 = nn.Linear(2, 1)
def forward(self, x):
out = torch.sigmoid(self.layer1(x))
out = self.layer2(out)
y_prob = torch.sigmoid(out)
return y_prob
# + [markdown] id="BfsnoP3EkQgB"
# A small model is done! Note that the output is no longer the raw value of a linear layer but a sigmoid! Since the task is binary classification, it is enough to report a confidence and choose a threshold, as usual.
#
# Now let's write a training loop to train the model. For classification we need a different loss function - we will use [`nn.BCELoss()`](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html):
# + id="oyCqdqq3kxbT" _cell_id="TqEwWkUuJQ8fRKMc"
# TODO - write the model training loop
# - Create the model
# - Set up the optimizer (SGD)
# - Create the loss-function module
# - Start training via fit_model() - see, we didn't even have to rewrite that function!
# - Plot the training history
# + colab={"base_uri": "https://localhost:8080/", "height": 173} executionInfo={"elapsed": 8191, "status": "ok", "timestamp": 1602520186469, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="lW61HcH9mo5S" outputId="1af788f0-2f68-49cc-b7f0-731c0f48219c" _cell_id="cXEtexLrf4JX3agJ"
from sklearn.metrics import classification_report
def show_classification_report(model, X, y):
X_tnsr = torch.tensor(X).float()
y_pred_prob = model(X_tnsr).detach().numpy()
# Turn the vector of confidences into a binary vector using the decision threshold
y_pred = y_pred_prob > 0.5
rep = classification_report(y, y_pred)
print(rep)
show_classification_report(model, X_data, y_data)
# + [markdown] id="TETFCFIAlGa_"
# Now let's also write a data-visualization helper to look at the decision space:
# + colab={"base_uri": "https://localhost:8080/", "height": 614} executionInfo={"elapsed": 8179, "status": "ok", "timestamp": 1602520186470, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="zgytGbA1N4Zj" outputId="24d5f795-eb8a-4707-d02e-6e168ba3a32e" _cell_id="pbOvCNPRgOAiLC72"
def plot_2d_decision_boundary(model, X, y):
x1_vals = np.linspace(X[:,0].min()-0.5, X[:,0].max()+0.5, 100)
x2_vals = np.linspace(X[:,1].min()-0.5, X[:,1].max()+0.5, 100)
xx1, xx2 = np.meshgrid(x1_vals, x2_vals)
X_tnsr = torch.tensor(np.c_[xx1.ravel(), xx2.ravel()]).float()
y_pred = model(X_tnsr).detach()
y_pred = y_pred.reshape(xx1.shape)
plt.contourf(xx1, xx2, y_pred)
pnts_scatter = plt.scatter(X[:, 0], X[:, 1], c=y, s=30, edgecolor='k')
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.grid(True)
plt.legend(handles=pnts_scatter.legend_elements()[0], labels=['0', '1'])
plt.show()
plot_2d_decision_boundary(model, X_data, y_data)
# + [markdown] id="-WSS6aA_lMjd"
# Hmm, a textbook case of underfitting! The model cannot separate such nonlinear data - let's give the model more capacity: three layers and more neurons per layer!
# + id="SYLB-XHQmJT7" _cell_id="SdLszySyjUh9fwCa"
class ClassificationNNv2(nn.Module):
def __init__(self):
super().__init__()
HIDDEN = 20
torch.manual_seed(RANDOM_STATE)
self.layers = nn.Sequential(
            # Note that nonlinearities can also go between layers!
nn.Linear(2, HIDDEN),
nn.Sigmoid(),
nn.Linear(HIDDEN, HIDDEN),
nn.Sigmoid(),
nn.Linear(HIDDEN, 1),
)
def forward(self, x):
return torch.sigmoid(self.layers(x))
# + id="LXGPLcYRmM8e" _cell_id="rTVQ2fmXR2k8LLSI"
# TODO - train the model again and look at the metrics (make sure it did not get better...)
# + [markdown] id="ICsgLKMon5ys"
# Now let's try changing the activation functions of the layers (the output sigmoid is needed to turn the raw scores, the logits, into a confidence value - leave it alone) from sigmoid to the hyperbolic tangent (called tanh in English) - [torch.nn.Tanh()](https://pytorch.org/docs/stable/generated/torch.nn.Tanh.html). Evaluate how much the character of the model's predictions changes!
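# As a stand-alone NumPy aside (not part of the notebook's training code), the two activations differ mainly in output range and centering, which is what changes the character of the decision surface:

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid: squashes any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-4, 4, 9)
s, t = sigmoid(z), np.tanh(z)

# tanh is zero-centered with range (-1, 1); sigmoid has range (0, 1)
print(t.min(), t.max())  # close to -1 and 1
print(s.min(), s.max())  # close to 0 and 1

# tanh is just a rescaled sigmoid: tanh(z) = 2*sigmoid(2z) - 1
print(np.allclose(t, 2 * sigmoid(2 * z) - 1))  # True
```

# Because tanh is zero-centered, the activations passed between hidden layers have a mean closer to zero, which is one common explanation for the smoother decision surfaces it tends to produce.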
# + id="C1jAZvown3_9" _cell_id="6Y35NxNn3HFXMLKJ"
# TODO - replace the layer activation function and train the model, draw conclusions
class ClassificationNNv3(nn.Module):
def forward(self, x):
return torch.sigmoid(self.layers(x))
# + _cell_id="Rfc2IVMlAedsSawn"
# TEST
_test_model = ClassificationNNv3()
_test_tnsr = preprocess_vector([0.1, -0.1])
_test_pred = _test_model(_test_tnsr)
np.testing.assert_array_almost_equal(_test_pred.shape, (1, 1))
np.testing.assert_array_almost_equal(_test_pred.detach(), [[0.5198]], decimal=4)
# + _cell_id="GeaIdupVS6DIfkPx"
# TODO - now train the latest version (v3) of the model, see how everything changed, and write down your conclusions
# + [markdown] id="xbcTeCrXXbme"
# # How to save a model?
#
# A frequently asked question: how do I save a model so that the results of training are preserved? Surely I will not have to retrain the model from scratch every time? Of course not! And PyTorch has very simple functionality for this!
# + id="IYzqGY7xX3n-" _cell_id="9nT9kotwEE7LgJXt"
# Set the path where we want to save the model parameters
SAVE_PATH = 'my_model.pth'
# Call the save function
# Save the model parameters!
torch.save(model.state_dict(), SAVE_PATH)
# + [markdown] id="jKZjHQ8AYPDP"
# After saving, a file with the model's name should appear in the file system! That is how the parameters can be transferred to a file. And how do we load them back?
# + colab={"base_uri": "https://localhost:8080/", "height": 614} executionInfo={"elapsed": 8102, "status": "ok", "timestamp": 1602520186474, "user": {"displayName": "\u0410\u043b\u0435\u043a\u0441\u0435\u0439 \u0414\u0435\u0432\u044f\u0442\u043a\u0438\u043d", "photoUrl": "", "userId": "11945040185410340858"}, "user_tz": -180} id="dDS8pwxwXbTQ" outputId="2d33eabb-29e7-492e-9472-3225dbf4900e" _cell_id="TkByMEMzPzywIWQm"
loaded_state_dict = torch.load(SAVE_PATH)
model = ClassificationNNv3()
model.load_state_dict(loaded_state_dict)
plot_2d_decision_boundary(model, X_data, y_data)
# + [markdown] id="kENURkm_Yti9"
# Great! We created a new model, loaded the saved parameters, and everything works! One caveat of this approach: if the model architecture changes, the parameters will fail to load. Otherwise, you can transfer a trained model this way and use it anywhere (anywhere the code to create the model object exists).
# + [markdown] id="ZKxUIUKbZE1G"
# # Summary
#
# To sum up what we have done: neural networks are an extremely flexible kind of machine learning model. They can be used to solve the same tasks as classical models.
#
# Neural networks come with two especially important considerations and a few smaller ones - let's discuss the important ones:
# - Data preprocessing - since a neuron resembles a linear model with a nonlinear function attached, the data must be normalized (Std or MinMax) for the weights to be tuned correctly, just as we did with linear models. This helps the model train faster and make fewer mistakes. If you skip it, you may find that the model learns poorly, or that the loss does not decrease at all but grows instead. This situation is called an **exploding gradient**, and it has to be dealt with - first of all, by normalizing the input data.
# > ⚠️ Lowering the learning rate also helps against exploding gradients, but that is a whole other story =)
# - Architecture - as you have probably noticed, neural networks have the number of layers, the number of neurons in each layer, the type and number of activation functions, and so on as hyperparameters. The architecture can be built in many different ways, and in practice this often gets in the way. Compared to a forest, where only a couple of hyperparameters are tuned, searching over the architecture of a large network can take much longer than training an equivalent boosting model. So do not look at shallow (plain fully connected) networks as a panacea. On tabular data, **classical algorithms** generally give more stable results in a shorter time.
# > 🔥 On image and text tasks, however, they really do beat classical algorithms on a number of problems.
#
# We have covered a very serious topic: neural networks and the PyTorch framework! There is still a lot for you to learn, but this is a solid base from which to dive into the subject with confidence!
#
# The framework itself is very powerful, so we will learn a lot more in upcoming practice sessions - but the main thing is that you can explore and try everything yourself! Above all, don't be afraid to experiment!
#
# <p align="center"><img src="https://raw.githubusercontent.com/kail4ek/ml_edu/master/assets/go_team_go.jpg" width=300/></p>
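# A minimal sketch of the Std (z-score) normalization mentioned in the summary above, in plain NumPy; `X_raw` is a made-up array, not the notebook's dataset:

```python
import numpy as np

# Two features on very different scales
X_raw = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])

# Standardize each column: subtract the mean, divide by the standard deviation
mu, sigma = X_raw.mean(axis=0), X_raw.std(axis=0)
X_norm = (X_raw - mu) / sigma

print(X_norm.mean(axis=0))  # approximately [0, 0]
print(X_norm.std(axis=0))   # [1, 1]
```

# In practice the `mu` and `sigma` estimated on the training set should be reused, unchanged, to transform the validation and test sets.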
# + [markdown] id="D4qYwuyEoqmu"
# # Conclusions - assignment
#
# Question time!
#
# 1. Which layers of a neural network are called "hidden"?
# 2. How many hidden layers can a neural network have?
# 3. Can different activation functions be used for neurons within a single layer?
# 4. What is the name of the process in which prediction runs from layer to layer?
# 5. Why are the weights initially set to random values?
# 6. What is the validation set for? How does it differ from the test set?
# 7. What is the difference between stochastic gradient descent and full-batch gradient descent?
# 8. What is regularization? Why is it needed?
| notebooks/42_NeuralNetworks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Assignment: Computing the Exponential Integral
#
# Name: <b><NAME></b><br>
# NIM: <b>20319304</b><br>
# Course: <b>AS5131 Teori dan Observasi Bintang</b><br>
# ***
#
# The exponential integral, $E_n(x)$, of order $n$ is defined by the following equation:
#
# $$\begin{eqnarray}
# E_n(x) = \int_1^{\infty} \frac{e^{-xw}}{w^n} dw \\
# \end{eqnarray}$$
#
# The equation above can be evaluated numerically through the following recurrence:
#
# $$\begin{eqnarray}
# n E_{n+1}(x) = e^{-x} - x E_n(x) \\
# \end{eqnarray}$$
#
# The recurrence needs $E_1(x)$ as a starting point (note that $E_1(x) \to \infty$ as $x \to 0$). A numerical approximation for $E_1(x)$ was given by Abramowitz and Stegun (1964). For $x \leq 1$ it is,
#
# $$\begin{eqnarray}
# E_1 &=& - \ln x - 0.57721566 + 0.99999193x - 0.24991055x^2 \\
# && + 0.05519968x^3 - 0.00976004x^4 + 0.00107857x^5 \\
# \end{eqnarray}$$
#
# and for $x > 1$,
#
# $$\begin{eqnarray}
# E_1 &=& \frac{x^4 + a_3x^3 + a_2x^2 + a_1x + a_0}{x^4 + b_3x^3 + b_2x^2 + b_1x + b_0} \frac{1}{x e^x}
# \end{eqnarray}$$
#
# where the parameters $a$ and $b$ have the following values:
#
# $$\begin{eqnarray}
# a_3 &=& 8.5733287401, & & b_3 = 9.5733223454 \\
# a_2 &=& 18.0590169730, & & b_2 = 25.6329561486 \\
# a_1 &=& 8.6347608925, & & b_1 = 21.0996530827 \\
# a_0 &=& 0.2677737343, & & b_0 = 3.9584969228 \\
# \end{eqnarray}$$
#
# Based on the equations above, the exponential integral is computed using Python. The program is as follows:
# #### 1. Importing the modules
import math as math
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator
# #### 2. Defining the values of x
x = np.linspace(0., 2., 2000)
x = np.array(x)
# #### 3. Defining the function used
def exp_integral(n, x):
    # E_1 approximations from Abramowitz & Stegun (1964), evaluated element-wise.
    # For x <= 1.
    val_small = (-np.log(x) - (0.57721566) + (0.99999193*x) - (0.24991055*x**2) + (0.05519968*x**3) - (0.00976004*x**4) + (0.00107857*x**5))
    # For x > 1.
    a0 = 0.2677737343
    a1 = 8.6347608925
    a2 = 18.0590169730
    a3 = 8.5733287401
    b0 = 3.9584969228
    b1 = 21.0996530827
    b2 = 25.6329561486
    b3 = 9.5733223454
    val1 = (x**4)+(a3*x**3)+(a2*x**2)+(a1*x)+a0
    val2 = (x**4)+(b3*x**3)+(b2*x**2)+(b1*x)+b0
    val3 = x*np.exp(x)
    # Select the correct branch per element (the original `if x.any() <= 1.`
    # applied a single branch to the entire array)
    val = np.where(x <= 1., val_small, val1/val2/val3)
    if n == 1: # If n = 1 -> E_1.
        return val
    else: # If n = 2, 3, ... -> E_2, E_3, ...
        i = 1
        while i < n:
            val = (np.exp(-x)-(x*val))/i
            i += 1
        return val
# #### 4. Plotting the values
plt.figure(figsize=(12,8))
plt.plot(x,exp_integral(1,x), label=r'$n = 1$')
plt.plot(x,exp_integral(2,x), label=r'$n = 2$')
plt.plot(x,exp_integral(3,x), label=r'$n = 3$')
plt.xlabel(r'$x$')
plt.ylabel(r'$E_n(x)$')
plt.xlim(0., 2.)
plt.ylim(0., 2.)
ax = plt.gca()
ax.xaxis.set_minor_locator(AutoMinorLocator())
ax.yaxis.set_minor_locator(AutoMinorLocator())
plt.legend(fontsize=12.)
# Note: the warnings above appear because the computation attempts to divide by zero (at $x = 0$). They can be ignored.
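# As an extra sanity check (not part of the assignment), the rational approximation for $E_1(x)$ with $x > 1$ can be compared against a direct numerical quadrature of the defining integral; the two should agree to several decimal places:

```python
import numpy as np

x = 2.0

# Rational approximation for x > 1 (Abramowitz & Stegun 1964)
a0, a1, a2, a3 = 0.2677737343, 8.6347608925, 18.0590169730, 8.5733287401
b0, b1, b2, b3 = 3.9584969228, 21.0996530827, 25.6329561486, 9.5733223454
num = x**4 + a3*x**3 + a2*x**2 + a1*x + a0
den = x**4 + b3*x**3 + b2*x**2 + b1*x + b0
e1_approx = num / den / (x * np.exp(x))

# Trapezoid-rule quadrature of E_1(x) = int_1^inf exp(-x*w)/w dw;
# the tail truncated beyond w = 40 is of order exp(-80), i.e. negligible
w = np.linspace(1.0, 40.0, 200001)
f = np.exp(-x * w) / w
e1_numeric = float(((f[:-1] + f[1:]) / 2.0 * np.diff(w)).sum())

print(e1_approx, e1_numeric)  # both approximately 0.04890
```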
| exp_integral.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FloPy
#
# ### MT3D-USGS Example
#
# Demonstrates functionality of the flopy MT3D-USGS module using the 'Crank-Nicolson' example distributed with MT3D-USGS.
#
# #### Problem description:
#
# * Grid dimensions: 1 Layer, 3 Rows, 650 Columns
# * Stress periods: 3
# * Units are in seconds and meters
# * Flow package: UPW
# * Stress packages: SFR, GHB
# * Solvers: NWT, GCG
# +
# %matplotlib inline
import sys
import os
import platform
import string
from io import StringIO, BytesIO
import math
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
import flopy
except:
fpth = os.path.abspath(os.path.join('..', '..'))
sys.path.append(fpth)
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
# +
modelpth = os.path.join('data')
modelname = 'CrnkNic'
mfexe = 'mfnwt'
mtexe = 'mt3dusgs'
if platform.system() == 'Windows':
mfexe += '.exe'
mtexe += '.exe'
# Make sure modelpth directory exists
if not os.path.exists(modelpth):
os.mkdir(modelpth)
# Instantiate MODFLOW object in flopy
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth, version='mfnwt')
# -
# Set up model discretization
# +
Lx = 650.0
Ly = 15
nrow = 3
ncol = 650
nlay = 1
delr = Lx / ncol
delc = Ly / nrow
xmax = ncol * delr
ymax = nrow * delc
X, Y = np.meshgrid(np.linspace(delr / 2, xmax - delr / 2, ncol),
np.linspace(ymax - delc / 2, 0 + delc / 2, nrow))
# -
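# The meshgrid above places `X` and `Y` at cell centers; a quick self-contained check of the corner coordinates (the expected values follow from `delr = 1.0` m and `delc = 5.0` m in this grid):

```python
import numpy as np

# Rebuild the cell-centered grid from the discretization above
Lx, Ly, nrow, ncol = 650.0, 15.0, 3, 650
delr, delc = Lx / ncol, Ly / nrow          # 1.0 m x 5.0 m cells
xmax, ymax = ncol * delr, nrow * delc
X, Y = np.meshgrid(np.linspace(delr / 2, xmax - delr / 2, ncol),
                   np.linspace(ymax - delc / 2, 0 + delc / 2, nrow))

print(X[0, 0], X[0, -1])   # 0.5 649.5 - first and last column centers
print(Y[0, 0], Y[-1, 0])   # 12.5 2.5  - top row center down to bottom row center
```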
# Instantiate output control (oc) package for MODFLOW-NWT
# Output Control: Create a flopy output control object
oc = flopy.modflow.ModflowOc(mf)
# Instantiate solver package for MODFLOW-NWT
# +
# Newton-Raphson Solver: Create a flopy nwt package object
headtol = 1.0E-4
fluxtol = 5
maxiterout = 5000
thickfact = 1E-06
linmeth = 2
iprnwt = 1
ibotav = 1
nwt = flopy.modflow.ModflowNwt(mf, headtol=headtol, fluxtol=fluxtol, maxiterout=maxiterout,
thickfact=thickfact, linmeth=linmeth, iprnwt=iprnwt, ibotav=ibotav,
options='SIMPLE')
# -
# Instantiate discretization (DIS) package for MODFLOW-NWT
# +
# The equations for calculating the ground elevation in the 1 Layer CrnkNic model.
# Although Y isn't used, keeping it here for symmetry
def topElev(X, Y):
return 100. - (np.ceil(X)-1) * 0.03
grndElev = topElev(X, Y)
bedRockElev = grndElev - 3.
Steady = [False, False, False]
nstp = [1, 1, 1]
tsmult = [1., 1., 1.]
# Stress periods extend from (12AM-8:29:59AM); (8:30AM-11:30:59AM); (11:31AM-23:59:59PM)
perlen = [30600, 10800, 45000]
# Create the discretization object
# itmuni = 1 (seconds); lenuni = 2 (meters)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol, nper=3, delr=delr, delc=delc,
top=grndElev, botm=bedRockElev, laycbd=0, itmuni=1, lenuni=2,
steady=Steady, nstp=nstp, tsmult=tsmult, perlen=perlen)
# -
# Instantiate upstream weighting (UPW) flow package for MODFLOW-NWT
# +
# UPW parameters
# UPW must be instantiated after DIS. Otherwise, during the mf.write_input() procedures,
# flopy will crash.
laytyp = 1
layavg = 2
chani = 1.0
layvka = 1
iphdry = 0
hk = 0.1
hani = 1
vka = 1.
ss = 0.000001
sy = 0.20
hdry = -888
upw = flopy.modflow.ModflowUpw(mf, laytyp=laytyp, layavg=layavg, chani=chani, layvka=layvka,
ipakcb=53, hdry=hdry, iphdry=iphdry, hk=hk, hani=hani,
vka=vka, ss=ss, sy=sy)
# -
# Instantiate basic (BAS or BA6) package for MODFLOW-NWT
# +
# Create a flopy basic package object
def calc_strtElev(X, Y):
return 99.5 - (np.ceil(X)-1) * 0.0001
ibound = np.ones((nlay, nrow, ncol))
ibound[:,0,:] *= -1
ibound[:,2,:] *= -1
strtElev = calc_strtElev(X, Y)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, hnoflo=hdry, strt=strtElev)
# -
# Instantiate streamflow routing (SFR2) package for MODFLOW-NWT
# +
# Streamflow Routing Package: Try and set up with minimal options in use
# 9 11 IFACE # Data Set 1: ISTCB1 ISTCB2
nstrm = ncol
nss = 6
const = 1.0
dleak = 0.0001
istcb1 = -10
istcb2 = 11
isfropt = 1
segment_data = None
channel_geometry_data = None
channel_flow_data = None
dataset_5 = None
reachinput = True
# The next couple of lines set up the reach_data for the 30x100 hypothetical model.
# Will need to adjust the row based on which grid discretization we're doing.
# Ensure that the stream goes down one of the middle rows of the model.
strmBed_Elev = 98.75 - (np.ceil(X[1,:])-1) * 0.0001
s1 = 'k,i,j,iseg,ireach,rchlen,strtop,slope,strthick,strhc1\n'
iseg = 0
irch = 0
for y in range(ncol):
if y <= 37:
if iseg == 0:
irch = 1
else:
irch += 1
iseg = 1
strhc1 = 1.0e-10
elif y <= 104:
if iseg == 1:
irch = 1
else:
irch += 1
iseg = 2
strhc1 = 1.0e-10
elif y <= 280:
if iseg == 2:
irch = 1
else:
irch += 1
iseg = 3
strhc1 = 2.946219199e-6
elif y <= 432:
if iseg == 3:
irch = 1
else:
irch += 1
iseg = 4
strhc1 = 1.375079882e-6
elif y <= 618:
if iseg == 4:
irch = 1
else:
irch += 1
iseg = 5
strhc1 = 1.764700062e-6
else:
if iseg == 5:
irch = 1
else:
irch += 1
iseg = 6
strhc1 = 1e-10
# remember that lay, row, col need to be zero-based and are adjusted accordingly by flopy
# layer + row + col + iseg + irch + rchlen + strtop + slope + strthick + strmbed K
s1 += '0,{}'.format(1)
s1 += ',{}'.format(y)
s1 += ',{}'.format(iseg)
s1 += ',{}'.format(irch)
s1 += ',{}'.format(delr)
s1 += ',{}'.format(strmBed_Elev[y])
s1 += ',{}'.format(0.0001)
s1 += ',{}'.format(0.50)
s1 += ',{}\n'.format(strhc1)
if not os.path.exists('data'):
os.mkdir('data')
fpth = os.path.join('data', 's1.csv')
f = open(fpth, 'w')
f.write(s1)
f.close()
dtype = [('k', '<i4'), ('i', '<i4'), ('j', '<i4'), ('iseg', '<i4'),
('ireach', '<f8'), ('rchlen', '<f8'), ('strtop', '<f8'),
('slope', '<f8'), ('strthick', '<f8'), ('strhc1', '<f8')]
if (sys.version_info > (3, 0)):
f = open(fpth, 'rb')
else:
f = open(fpth, 'r')
reach_data = np.genfromtxt(f, delimiter=',', names=True, dtype=dtype)
f.close()
s2 = "nseg,icalc,outseg,iupseg,nstrpts, flow,runoff,etsw,pptsw, roughch, roughbk,cdpth,fdpth,awdth,bwdth,width1,width2\n \
1, 1, 2, 0, 0, 0.0125, 0.0, 0.0, 0.0, 0.082078856000, 0.082078856000, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
2, 1, 3, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.143806300000, 0.143806300000, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
3, 1, 4, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.104569661821, 0.104569661821, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
4, 1, 5, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.126990045841, 0.126990045841, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
5, 1, 6, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.183322283828, 0.183322283828, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
6, 1, 0, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.183322283828, 0.183322283828, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5"
fpth = os.path.join('data', 's2.csv')
f = open(fpth, 'w')
f.write(s2)
f.close()
if (sys.version_info > (3, 0)):
f = open(fpth, 'rb')
else:
f = open(fpth, 'r')
segment_data = np.genfromtxt(f, delimiter=',',names=True)
f.close()
# Be sure to convert segment_data to a dictionary keyed on stress period.
segment_data = np.atleast_1d(segment_data)
segment_data = {0: segment_data,
1: segment_data,
2: segment_data}
# There are 3 stress periods
dataset_5 = {0: [nss, 0, 0],
1: [nss, 0, 0],
2: [nss, 0, 0]}
sfr = flopy.modflow.ModflowSfr2(mf, nstrm=nstrm, nss=nss, const=const, dleak=dleak, isfropt=isfropt, istcb2=0,
reachinput=True, reach_data=reach_data, dataset_5=dataset_5,
segment_data=segment_data, channel_geometry_data=channel_geometry_data)
# -
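# The if/elif ladder above, which assigns a segment number to each column, can be expressed more compactly with `np.searchsorted`; this is an equivalent sketch, not what the notebook actually runs:

```python
import numpy as np

# Last column index (inclusive) belonging to segments 1..5; anything beyond is segment 6
seg_breaks = np.array([37, 104, 280, 432, 618])

def seg_of(col):
    # side='left' returns the index of the first break >= col,
    # i.e. the 0-based segment index; +1 gives the 1-based iseg
    return int(np.searchsorted(seg_breaks, col, side='left')) + 1

print([seg_of(c) for c in (0, 37, 38, 618, 619, 649)])  # [1, 1, 2, 5, 6, 6]
```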
# Instantiate gage package for use with MODFLOW-NWT package
gages = [[1,38,61,1],[2,67,62,1], [3,176,63,1], [4,152,64,1], [5,186,65,1], [6,31,66,1]]
files = ['CrnkNic.gage','CrnkNic.gag1','CrnkNic.gag2','CrnkNic.gag3','CrnkNic.gag4','CrnkNic.gag5',
'CrnkNic.gag6']
gage = flopy.modflow.ModflowGage(mf, numgage=6, gage_data=gages, filenames = files)
# Instantiate linkage with mass transport routing (LMT) package for MODFLOW-NWT (generates linker file)
lmt = flopy.modflow.ModflowLmt(mf, output_file_name='CrnkNic.ftl', output_file_header='extended',
output_file_format='formatted', package_flows = ['sfr'])
# Write the MODFLOW input files
# +
pth = os.getcwd()
print(pth)
mf.write_input()
# run the model
mf.run_model()
# -
# Now draft up MT3D-USGS input files.
# Instantiate MT3D-USGS object in flopy
mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=modelpth,
version='mt3d-usgs', namefile_ext='mtnam', exe_name=mtexe,
ftlfilename='CrnkNic.ftl', ftlfree=True)
# Instantiate basic transport (BTN) package for MT3D-USGS
btn = flopy.mt3d.Mt3dBtn(mt, sconc=3.7, ncomp=1, prsity=0.2, cinact=-1.0,
thkmin=0.001, nprs=-1, nprobs=10, chkmas=True,
nprmas=10, dt0=180, mxstrn=2500)
# Instantiate advection (ADV) package for MT3D-USGS
adv = flopy.mt3d.Mt3dAdv(mt, mixelm=0, percel=1.00, mxpart=5000, nadvfd=1)
# Instantiate reaction (RCT) and generalized conjugate gradient solver (GCG) packages for MT3D-USGS
rct = flopy.mt3d.Mt3dRct(mt,isothm=0,ireact=100,igetsc=0,rc1=0.01)
gcg = flopy.mt3d.Mt3dGcg(mt, mxiter=10, iter1=50, isolve=3, ncrs=0,
accl=1, cclose=1e-6, iprgcg=1)
# Instantiate source-sink mixing (SSM) package for MT3D-USGS
# +
# For SSM, need to set the constant head boundary conditions to the ambient concentration
# for all 1,300 constant head boundary cells.
itype = flopy.mt3d.Mt3dSsm.itype_dict()
ssm_data = {}
ssm_data[0] = [(0, 0, 0, 3.7, itype['CHD'])]
ssm_data[0].append((0, 2, 0, 3.7, itype['CHD']))
for i in [0,2]:
for j in range(1, ncol):
ssm_data[0].append((0, i, j, 3.7, itype['CHD']))
ssm = flopy.mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)
# -
# Instantiate streamflow transport (SFT) package for MT3D-USGS
# +
dispsf = []
for y in range(ncol):
if y <= 37:
dispsf.append(0.12)
elif y <= 104:
dispsf.append(0.15)
elif y <= 280:
dispsf.append(0.24)
elif y <= 432:
dispsf.append(0.31)
elif y <= 618:
dispsf.append(0.40)
else:
dispsf.append(0.40)
# Enter a list of the observation points
# Each observation is taken as the last reach within the first 5 segments
seg_len = np.unique(reach_data['iseg'], return_counts=True)
obs_sf = np.cumsum(seg_len[1])
obs_sf = obs_sf.tolist()
# The last reach is not an observation point, therefore drop
obs_sf.pop(-1)
# In the first and last stress periods, concentration at the headwater is 3.7
sf_stress_period_data = {0: [0, 0, 3.7],
1: [0, 0, 11.4],
2: [0, 0, 3.7]}
gage_output = [None, None, 'CrnkNic.sftobs']
sft = flopy.mt3d.Mt3dSft(mt, nsfinit=650, mxsfbc=650, icbcsf=81, ioutobs=82,
isfsolv=1, cclosesf=1.0E-6, mxitersf=10, crntsf=1.0, iprtxmd=0,
coldsf=3.7, dispsf=dispsf, nobssf=5, obs_sf=obs_sf,
sf_stress_period_data = sf_stress_period_data,
filenames=gage_output)
sft.dispsf[0].format.fortran = "(10E15.6)"
# -
# Write the MT3D-USGS input files
# +
mt.write_input()
# run the model
mt.run_model()
# -
# # Compare mt3d-usgs results to an analytical solution
# +
# Define a function to read SFT output file
def load_ts_from_SFT_output(fname, nd=1):
f=open(fname, 'r')
iline=0
lst = []
for line in f:
if line.strip().split()[0].replace(".", "", 1).isdigit():
l = line.strip().split()
t = float(l[0])
loc = int(l[1])
conc = float(l[2])
if(loc == nd):
lst.append( [t,conc] )
ts = np.array(lst)
f.close()
return ts
# Also define a function to read OTIS output file
def load_ts_from_otis(fname, iobs=1):
f = open(fname,'r')
iline = 0
lst = []
for line in f:
l = line.strip().split()
t = float(l[0])
val = float(l[iobs])
lst.append( [t, val] )
ts = np.array(lst)
f.close()
return ts
# -
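# To see a parser like `load_ts_from_otis` above in action without the real output file, here is a self-contained round-trip on a tiny synthetic file (the function body is repeated so the snippet runs on its own; the numbers are invented):

```python
import os
import tempfile
import numpy as np

def load_ts_from_otis(fname, iobs=1):
    # Same parser as above: time in column 0, observation in column iobs
    lst = []
    with open(fname, 'r') as f:
        for line in f:
            l = line.strip().split()
            lst.append([float(l[0]), float(l[iobs])])
    return np.array(lst)

# Write a throwaway three-row "output file" and parse it back
with tempfile.NamedTemporaryFile('w', suffix='.out', delete=False) as tmp:
    tmp.write("0.0 3.7 3.7\n1.0 4.2 5.1\n2.0 4.0 4.9\n")
    fname = tmp.name

ts = load_ts_from_otis(fname, iobs=2)
os.remove(fname)
print(ts.shape)  # (3, 2)
print(ts[1])     # [1.  5.1]
```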
# Load output from SFT as well as from the OTIS solution
# +
# Model output
fname_SFTout = os.path.join('data', 'CrnkNic.sftcobs.out')
# Loading MT3D-USGS output
ts1_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=38)
ts2_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=105)
ts3_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=281)
ts4_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=433)
ts5_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=619)
# OTIS results located here
fname_OTIS = os.path.join('..', 'data', 'mt3d_test', 'mfnwt_mt3dusgs', 'sft_crnkNic', 'OTIS_solution.out')
# Loading OTIS output
ts1_Otis = load_ts_from_otis(fname_OTIS, 1)
ts2_Otis = load_ts_from_otis(fname_OTIS, 2)
ts3_Otis = load_ts_from_otis(fname_OTIS, 3)
ts4_Otis = load_ts_from_otis(fname_OTIS, 4)
ts5_Otis = load_ts_from_otis(fname_OTIS, 5)
# -
# Set up some plotting functions
# +
def set_plot_params():
import matplotlib as mpl
from matplotlib.font_manager import FontProperties
mpl.rcParams['font.sans-serif'] = 'Arial'
mpl.rcParams['font.serif'] = 'Times'
mpl.rcParams['font.cursive'] = 'Zapf Chancery'
mpl.rcParams['font.fantasy'] = 'Comic Sans MS'
mpl.rcParams['font.monospace'] = 'Courier New'
mpl.rcParams['pdf.compression'] = 0
mpl.rcParams['pdf.fonttype'] = 42
ticksize = 10
mpl.rcParams['legend.fontsize'] = 7
mpl.rcParams['axes.labelsize'] = 12
mpl.rcParams['xtick.labelsize'] = ticksize
mpl.rcParams['ytick.labelsize'] = ticksize
return
def set_sizexaxis(a,fmt,sz):
success = 0
x = a.get_xticks()
# print x
    xc = []
    for i in range(0, len(x)):
        text = fmt % (x[i])
        xc.append(text.ljust(16).strip())
# print xc
a.set_xticklabels(xc, size=sz)
success = 1
return success
def set_sizeyaxis(a,fmt,sz):
success = 0
y = a.get_yticks()
# print y
    yc = []
    for i in range(0, len(y)):
        text = fmt % (y[i])
        yc.append(text.ljust(16).strip())
# print yc
a.set_yticklabels(yc, size=sz)
success = 1
return success
# -
# Compare output:
# +
#set up figure
try:
plt.close('all')
except:
pass
set_plot_params()
fig = plt.figure(figsize=(6, 4), facecolor='w')
ax = fig.add_subplot(1, 1, 1)
ax.plot(ts1_Otis[:,0], ts1_Otis[:,1], 'k-', linewidth=1.0)
ax.plot(ts2_Otis[:,0], ts2_Otis[:,1], 'b-', linewidth=1.0)
ax.plot(ts3_Otis[:,0], ts3_Otis[:,1], 'r-', linewidth=1.0)
ax.plot(ts4_Otis[:,0], ts4_Otis[:,1], 'g-', linewidth=1.0)
ax.plot(ts5_Otis[:,0], ts5_Otis[:,1], 'c-', linewidth=1.0)
ax.plot((ts1_mt3d[:,0])/3600, ts1_mt3d[:,1], 'kD', markersize=2.0, mfc='none',mec='k')
ax.plot((ts2_mt3d[:,0])/3600, ts2_mt3d[:,1], 'b*', markersize=3.0, mfc='none',mec='b')
ax.plot((ts3_mt3d[:,0])/3600, ts3_mt3d[:,1], 'r+', markersize=3.0)
ax.plot((ts4_mt3d[:,0])/3600, ts4_mt3d[:,1], 'g^', markersize=2.0, mfc='none',mec='g')
ax.plot((ts5_mt3d[:,0])/3600, ts5_mt3d[:,1], 'co', markersize=2.0, mfc='none',mec='c')
#customize plot
ax.set_xlabel('Time, hours')
ax.set_ylabel('Concentration, mg L-1')
ax.set_ylim([3.5,13])
ticksize = 10
#legend
leg = ax.legend(
(
'Otis, Site 1', 'Otis, Site 2', 'Otis, Site 3', 'Otis, Site 4', 'Otis, Site 5',
'MT3D-USGS, Site 1', 'MT3D-USGS, Site 2', 'MT3D-USGS, Site 3', 'MT3D-USGS, Site 4', 'MT3D-USGS, Site 5',
),
loc='upper right', labelspacing=0.25, columnspacing=1,
handletextpad=0.5, handlelength=2.0, numpoints=1)
leg._drawFrame = False
plt.show()
# -
| examples/Notebooks/flopy3_MT3D-USGS_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Day 6: Data Loading, Manipulation, and Visualization
# ### You can use the following libraries for your assignment:
# > Numpy, Pandas, Matplotlib, Seaborn, LASIO, Welly
# ## Kindly load the LAS file (or the Exported.csv file) from the data folder
# ## Perform the below Tasks:
#
# >1. Investigate the components of the data file (number of columns, number of observations, null values, summary statistics)
# 2. Plot the null values as bars
# 3. Create a copy of the data frame and drop the NaN values
# 4. Use the other copy to fill in the NaN values
# 5. Which option do you prefer to work with, regarding the relationship between PHIE and DT or PHIE and RHOB?
#
#
# !pip install lasio
import lasio
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
F02_1 = lasio.read(r"F:\GeoML-2\GeoML-2.0\10DaysChallenge\Dutch_F3_Logs\F02-1_logs.las")
df = F02_1.df()
df.info()
df.describe()
df.isnull().sum().plot(kind = 'bar', title = 'Missing Values Comparison', figsize=(20,6))
df_clean=df.copy()
print('original df =',df_clean.shape)
df_clean.dropna(inplace=True)
print('after removing nans =',df_clean.shape)
print('=' *100)
print('This dataset has {0} rows and {1} columns'.format(df_clean.shape[0],df_clean.shape[1]))
df_1=df
df_mean=df_1.copy()
df_mean['RHOB']=df_mean['RHOB'].fillna(df_mean['RHOB'].mean())
df_mean['GR']=df_mean['GR'].fillna(df_mean['GR'].mean())
df_mean['PHIE']=df_mean['PHIE'].fillna(df_mean['PHIE'].mean())
display('after filling nans',df_mean)
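# One way to reason about the drop-vs-fill question above: mean imputation pulls the filled points toward the column mean, which tends to weaken correlations. A small synthetic pandas example (invented numbers, not the well-log data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'DT': np.arange(20.0)})
toy['PHIE'] = 0.01 * toy['DT'] + 0.1   # perfectly linear relationship
toy.loc[[3, 7, 11], 'PHIE'] = np.nan   # knock out a few values

# Option 1: drop rows with NaN, then correlate
corr_drop = toy.dropna()['DT'].corr(toy.dropna()['PHIE'])

# Option 2: fill NaN with the column mean, then correlate
filled = toy.fillna(toy.mean())
corr_fill = filled['DT'].corr(filled['PHIE'])

# Dropping keeps the correlation at 1.0; mean-filling dilutes it
print(round(corr_drop, 4), round(corr_fill, 4))
```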
# +
plt.figure(figsize=(10,5))
plt.scatter(df_clean.DT, df_clean.PHIE)
plt.title('PHIE vs DT plot ')
plt.xlabel('DT')
plt.ylabel('PHIE')
plt.grid()
plt.show();
plt.figure(figsize=(10,5))
plt.scatter(df_clean.RHOB, df_clean.PHIE)
plt.title('PHIE vs RHOB plot ')
plt.xlabel('RHOB')
plt.ylabel('PHIE')
plt.grid()
plt.show();
# +
plt.figure(figsize=(10,5))
plt.scatter(df_mean.DT, df_mean.PHIE)
plt.title('PHIE vs DT plot ')
plt.xlabel('DT')
plt.ylabel('PHIE')
plt.grid()
plt.show();
plt.figure(figsize=(10,5))
plt.scatter(df_mean.RHOB, df_mean.PHIE)
plt.title('PHIE vs RHOB plot ')
plt.xlabel('RHOB')
plt.ylabel('PHIE')
plt.grid()
plt.show();
# +
# I prefer to work with the clean data, without null values
# -
| Day6 of 10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Advanced Selections and Analysis
# There are lots of tools in `isambard` for analysing protein structure, most of them are baked into the AMPAL objects themselves.
# ## 1. Selecting Secondary Structure
# Let's load-up a PDB file:
import isambard
my_protein_3qy1 = isambard.ampal.convert_pdb_to_ampal('3qy1.pdb')
# You can select all the helices or strands simply by typing:
my_protein_3qy1.helices
my_protein_3qy1.strands
# These attributes return `Assembly` objects, where each of the helices/strands are represented as a separate `Polypeptide`. All of the normal attributes and methods associated with `Assembly` and `Polypeptide` objects work just the same as before.
my_helices_3qy1 = my_protein_3qy1.helices
my_helices_3qy1.sequences
my_helices_3qy1[0]
my_helices_3qy1[0].sequence
my_helices_3qy1[0].molecular_weight
my_helices_3qy1[0][15]
my_helices_3qy1[0][15]['CA']
# It is worth noting that the `ampal_parent` attribute of the individual helix refers to the `Polypeptide` to which it belongs.
my_helix = my_helices_3qy1[2]
my_helix.ampal_parent
# To get back to the original `Assembly`, therefore, we call `ampal_parent` twice.
my_helix.ampal_parent.ampal_parent
my_helix.ampal_parent == my_helices_3qy1
my_helix.ampal_parent.ampal_parent == my_protein_3qy1
# ## 2. Selecting All Residues or Atoms
# Sometimes it's convenient to select all of the `Residues` or `Atoms` in an `Assembly` or `Polypeptide` object:
my_protein_3qy1.get_monomers()
my_helices_3qy1.get_atoms()
# As you can see an itertools object is returned. This might be slightly confusing as you might expect a list, but this is what's known as an `iterator`: you can loop over it like a list or string, but if you want to use it repeatedly or examine its contents you'll need to convert it to a list. The advantage of returning an `iterator` is that it's much more memory efficient, and these lists could potentially be very large. If you'd like to know more about `iterables` and `iterators`, as well as related objects called `generators`, see the following link (this is quite advanced Python and is not essential for using any of the AMPAL framework): [Iterators and Generators](http://anandology.com/python-practice-book/iterators.html).
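# The single-use behaviour of iterators described above can be demonstrated with any plain Python iterator, no AMPAL objects needed:

```python
nums = iter([1, 2, 3])
print(list(nums))  # [1, 2, 3] - the iterator is consumed here
print(list(nums))  # []        - a second pass yields nothing

# If you need the contents more than once, materialize them into a list first
atoms = list(iter([1, 2, 3]))
print(atoms[0], len(atoms))  # 1 3 - a list can be indexed and re-read freely
```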
list(my_protein_3qy1.get_atoms())[:20] # Showing the first 20 elements for clarity
list(my_protein_3qy1.get_monomers())[:20] # Showing the first 20 elements for clarity
# ## 3. Analysing Composition
# We can easily look at the composition of sequences and structures using a `Counter` object. `Counter` can be fed any iterable (`lists` and `strings` are the most commonly used) and will count the occurrence of each element inside. We can start by looking at the composition of amino acids in a sequence:
from collections import Counter
my_protein_3qy1['A'].sequence
Counter(my_protein_3qy1['A'].sequence)
# But as stated before, you can use Counters on any iterable, not just strings. Let's make a list of all the pdb molecule codes of the ligands:
my_ligands = my_protein_3qy1.get_ligands()
my_ligands[0]
my_ligands[0].mol_code # This can be used to find the pdb molecule code of any residue
# There are two ways to generate a list of the codes, a `for` loop or a `list` comprehension; use whichever you are comfortable with. If you'd like to know more about `list` comprehensions, please see the following link (this is relatively advanced Python and, while not essential for using any of the AMPAL framework, it is very useful): [List Comprehensions](https://docs.python.org/3.5/tutorial/datastructures.html) (scroll down to the relevant section).
# With a for loop
mol_codes_1 = []
for lig in my_ligands:
mol_codes_1.append(lig.mol_code)
mol_codes_1[:5] # The first 5 elements
# A list comprehension
mol_codes_2 = [lig.mol_code for lig in my_ligands]
mol_codes_2[:5] # Showing the first 5 elements for clarity
# You can use either of these methods, use whichever one you're more comfortable with.
mol_codes_1 == mol_codes_2 # The lists that are produced are exactly the same
# Now the `list` of mol codes can be used to make a `Counter` object:
Counter(mol_codes_1)
# As you can see, there are 447 water molecules and 2 zinc ions.
# ## 4. Distance Analysis
# Now that we can select all atoms in the protein and understand the structure's composition, we can perform some simple analysis. Let's try to find all the residues that are close to the zinc ions.
zinc_1 = my_ligands[0]
zinc_1
# All `Ligand` objects are `Monomers`, even if they only contain a single atom. So we use the zinc `Atom` itself when measuring distances:
zinc_1['ZN']
# Measuring distances is simple, you can use the `isambard.geometry.distance` function. It takes two 3D vectors as an input, these can be in list form, tuples, or even `Atom` objects:
isambard.geometry.distance(zinc_1['ZN'], (0, 0, 0)) # Distance from the origin
first_ca = my_protein_3qy1['A'][0]['CA'] # CA of the first residue in chain A
first_ca
isambard.geometry.distance(zinc_1['ZN'], first_ca) # Distance in angstroms
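# Presumably `isambard.geometry.distance` computes the plain Euclidean distance between the two 3D vectors (this is an assumption about the implementation); a stand-alone equivalent using only the standard library:

```python
import math

def euclidean_distance(p, q):
    # Square root of the sum of squared coordinate differences
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(euclidean_distance((0, 0, 0), (3, 4, 0)))  # 5.0 - the classic 3-4-5 triangle
print(euclidean_distance((1, 1, 1), (1, 1, 1)))  # 0.0 - a point to itself
```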
# Now we need to loop over all the atoms and find which are close (<= 3 Å) to the zinc. We can use the distance function in geometry to do this:
atoms_close_to_zinc = []
for at in my_protein_3qy1.get_atoms():
if isambard.geometry.distance(zinc_1['ZN'], at) <= 3.0:
atoms_close_to_zinc.append(at)
atoms_close_to_zinc
# There are 5 atoms within 3 Å of the zinc, *including* the zinc atom itself. One way to ignore this atom in the above block of code is to use the `ligands=False` flag in `get_atoms()`:
atoms_close_to_zinc = []
for at in my_protein_3qy1.get_atoms(ligands=False):
if isambard.geometry.distance(zinc_1['ZN'], at) <= 3.0:
atoms_close_to_zinc.append(at)
atoms_close_to_zinc
# Now there are 4 atoms within 3 Å of the zinc, 2 sulphur atoms, an oxygen and a nitrogen. Let's find the residues that are coordinating the zinc:
atoms_close_to_zinc[0]
atoms_close_to_zinc[0].ampal_parent
# We can get all the residues using a for loop or a list comprehension:
residues_close_to_zinc = []
for at in atoms_close_to_zinc:
residues_close_to_zinc.append(at.ampal_parent)
residues_close_to_zinc
# The list comprehension is much more concise
residues_close_to_zinc_2 = [at.ampal_parent for at in atoms_close_to_zinc]
residues_close_to_zinc_2
# It looks like the zinc is coordinated by two cysteines, an aspartate and a histidine residue.
# ## 5. Is Within
# This kind of operation is very common when analysing proteins. So we have some built-in methods for handling this on `Assembly` and `Polymer` objects:
my_protein_3qy1.is_within(3, zinc_1['ZN']) # It takes a distance and a point
my_protein_3qy1.is_within(3, (-10, -10, -10))
my_protein_3qy1['A'].is_within(3, zinc_1['ZN'])
my_protein_3qy1['B'].is_within(3, zinc_1['ZN'])
# This list is empty as nothing on chain B is close to the zinc
# There is a partner method to `is_within`: every `Monomer` (this includes `Residues` and `Ligands`) has a `close_monomers` method. This returns all `Monomers` within a given cutoff distance.
zinc_1.close_monomers(my_protein_3qy1) # default cutoff is 4.0 Å
zinc_1.close_monomers(my_protein_3qy1, cutoff=6)
zinc_1.close_monomers(my_helices_3qy1, cutoff=1) # Nothing is closer than 1 Å from the zinc
# ## 6. Geometry in ISAMBARD
# There are a range of tools in ISAMBARD for performing geometric operations. We've already covered distance, but other commonly used functions include `angle_between_vectors`, `dihedral`, `unit_vector`, `find_foot`, `radius_of_circumcircle`. Be sure to check out the source code if you need a specific geometric function or have a look through the documentation.
# The `dihedral` function is probably the most useful of these for analysing proteins, so let's use it to measure some torsion angles. It requires four 3D vectors to calculate the dihedral; again, these can be `lists`, `tuples`, `numpy.arrays` or `Atoms`. **Note:** This method of calculating torsion angles is shown only as an example; see the Tagging tutorial for the proper, low-effort method!
r1 = my_protein_3qy1['B'][4]
r2 = my_protein_3qy1['B'][5]
r3 = my_protein_3qy1['B'][6]
omega = isambard.geometry.dihedral(r1['CA'], r1['C'], r2['N'], r2['CA'])
phi = isambard.geometry.dihedral(r1['C'], r2['N'], r2['CA'], r2['C'])
psi = isambard.geometry.dihedral(r2['N'], r2['CA'], r2['C'], r3['N'])
print(omega, phi, psi)
# We can use it to calculate the $\chi$ torsion angles too. R2 is leucine, so we can calculate the $\chi_1$ and $\chi_2$ angles:
r2.atoms
chi1 = isambard.geometry.dihedral(r2['N'], r2['CA'], r2['CB'], r2['CG'])
chi2 = isambard.geometry.dihedral(r2['CA'], r2['CB'], r2['CG'], r2['CD1'])
print(chi1, chi2)
# Our simple analysis shows that the leucine residue is in the gauche<sup>-</sup>/trans conformation.
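# The gauche<sup>-</sup>/trans assignment above can be automated with a small helper. Note that this function is *not* part of ISAMBARD; it is a hypothetical sketch that maps a torsion angle onto the three standard staggered rotamer wells (gauche+ near +60°, trans near ±180°, gauche- near -60°):

```python
def rotamer_class(angle):
    """Classify a torsion angle (in degrees) into a staggered rotamer well."""
    # Normalise the angle to the (-180, 180] range
    a = ((angle + 180.0) % 360.0) - 180.0
    if a > 120.0 or a <= -120.0:
        return 'trans'
    return 'gauche+' if a > 0 else 'gauche-'

print(rotamer_class(-65.0), rotamer_class(175.0))  # gauche- trans
```

# Applied to `chi1` and `chi2` from the cell above, this reproduces the gauche<sup>-</sup>/trans assignment.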
# ## 7. Summary and activities
# There are lots of tools for making complex selections in ISAMBARD. Combined with the tools for geometry, detailed analysis can be performed on these selections.
# 1. Find all the residues that are:
# 1. within 5 Å of crystal water.
# 1. *not* within 5 Å of crystal water.
# 1. Find how many cis-peptide bonds there are in this structure.
# 1. Perform these activities on another PDB file.
| tutorial_notebooks/2_Advanced_Selections_and_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Distracted-Driver-Detection
# ### Problem Description
# In this competition you are given driver images, each taken in a car with a driver doing something in the car (texting, eating, talking on the phone, makeup, reaching behind, etc). Your goal is to predict the likelihood of what the driver is doing in each picture.
#
# The 10 classes to predict are as follows,
# <br>
# <br>
# <table>
# <tr>
# <td>
# <li>c0: safe driving</li>
# <br>
# <li>c1: texting - right</li>
# <br>
# <li>c2: talking on the phone - right</li>
# <br>
# <li>c3: texting - left</li>
# <br>
# <li>c4: talking on the phone - left</li>
# <br>
# <li>c5: operating the radio</li>
# <br>
# <li>c6: drinking</li>
# <br>
# <li>c7: reaching behind</li>
# <br>
# <li>c8: hair and makeup</li>
# <br>
# <li>c9: talking to passenger</li>
# </td>
# <td>
# <img src="./supp/driver.gif" style="width:300;height:300px;">
# </td>
# </tr>
#
# </table>
#
# ### Summary of Results
# Using a 50-layer Residual Network with the following parameters, the following scores (losses) were obtained.
# <table>
# <li>10 Epochs</li>
# <li>32 Batch Size</li>
# <li>Adam Optimizer</li>
# <li>Glorot Uniform Initializer</li>
# <tr>
# <td>
# **Training Loss**
# </td>
# <td>
# 0.93
# </td>
# </tr>
# <tr>
# <td>
# **Validation Loss**
# </td>
# <td>
# 3.79
# </td>
# </tr>
# <tr>
# <td>
# **Holdout Loss**
# </td>
# <td>
# 2.64
# </td>
# </tr>
# </table>
#
# **Why the high losses? Simply put, we don't have enough resources to quickly iterate / hyper-parameter tune the model!** If more resources were available (RAM, CPU speed), we could hyper-parameter tune over grid searches and combat the high bias / high variance from which this model currently suffers. [This is how you'd fix high bias/variance.](#improve)
#
#
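# As a concrete illustration of what such a grid search could look like, here is a minimal, framework-free sketch. `evaluate` is a stand-in stub (in a real run it would train and cross-validate the model with the given hyper-parameters and return the dev-set loss), and the grid values are illustrative assumptions rather than tuned recommendations:

```python
from itertools import product

param_grid = {'epochs': [5, 10, 20], 'batch_size': [32, 64, 128]}

def evaluate(epochs, batch_size):
    # Stub scoring function: in practice, train and cross-validate the model
    # with these hyper-parameters and return the dev-set loss.
    return abs(epochs - 10) * 0.1 + abs(batch_size - 64) * 0.01

best = min(product(param_grid['epochs'], param_grid['batch_size']),
           key=lambda params: evaluate(*params))
print(best)  # best (epochs, batch_size) pair under the stub score
```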
# ### Import Dependencies and Define Functions
# Let's begin by importing some useful dependencies and defining some key functions that we'll use throughout the notebook.
# +
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from keras import layers
from keras.layers import (Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization,
Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D)
from keras.wrappers.scikit_learn import KerasClassifier
from keras.models import Model, load_model, save_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
# %matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
from sklearn.model_selection import StratifiedKFold, cross_validate, LeaveOneGroupOut
from PIL import Image
# +
def PlotClassFrequency(class_counts):
plt.figure(figsize=(15,4))
plt.bar(class_counts.index,class_counts)
plt.xlabel('class')
plt.xticks(np.arange(0, 10, 1.0))
plt.ylabel('count')
plt.title('Number of Images per Class')
plt.show()
def DescribeImageData(data):
print('Average number of images: ' + str(np.mean(data)))
print("Lowest image count: {}. At: {}".format(data.min(), data.idxmin()))
print("Highest image count: {}. At: {}".format(data.max(), data.idxmax()))
print(data.describe())
def CreateImgArray(height, width, channel, data, folder, save_labels = True):
"""
Writes image files found in 'imgs/train' to array of shape
[examples, height, width, channel]
Arguments:
height -- integer, height in pixels
width -- integer, width in pixels
channel -- integer, number of channels (or dimensions) for image (3 for RGB)
data -- dataframe, containing associated image properties, such as:
subject -> string, alpha-numeric code of participant in image
classname -> string, the class name i.e. 'c0', 'c1', etc.
img -> string, image name
folder -- string, either 'test' or 'train' folder containing the images
save_labels -- bool, True if labels should be saved, or False (just save 'X' images array).
Note: only applies if using train folder
Returns:
.npy file -- file, contains the associated conversion of images to numerical values for processing
"""
num_examples = len(data)
X = np.zeros((num_examples,height,width,channel))
if (folder == 'train') & (save_labels == True):
Y = np.zeros(num_examples)
for m in range(num_examples):
current_img = data.img[m]
img_path = 'imgs/' + folder + '/' + current_img
img = image.load_img(img_path, target_size=(height, width))
x = image.img_to_array(img)
x = preprocess_input(x)
X[m] = x
if (folder == 'train') & (save_labels == True):
Y[m] = data.loc[data['img'] == current_img, 'classname'].iloc[0]
np.save('X_'+ folder + '_' + str(height) + '_' + str(width), X)
if (folder == 'train') & (save_labels == True):
np.save('Y_'+ folder + '_' + str(height) + '_' + str(width), Y)
def Rescale(X):
return (1/(2*np.max(X))) * X + 0.5
def PrintImage(X_scaled, index, Y = None):
plt.imshow(X_scaled[index])
if Y is not None:
if Y.shape[1] == 1:
print ("y = " + str(np.squeeze(Y[index])))
else:
print("y = " + str(np.argmax(Y[index])))
def LOGO(X, Y, group, model_name, input_shape, classes, init, optimizer, metrics, epochs, batch_size):
logo = LeaveOneGroupOut()
n_groups = logo.get_n_splits(X, Y, group)
cvscores = np.zeros((n_groups, 4))
subject_id = []
i = 0
for train, test in logo.split(X, Y, group):
# Create model
model = model_name(input_shape = input_shape, classes = classes, init = init)
# Compile the model
model.compile(optimizer = optimizer, loss='sparse_categorical_crossentropy', metrics=[metrics])
# Fit the model
model.fit(X[train], Y[train], epochs = epochs, batch_size = batch_size, verbose = 0)
# Evaluate the model
scores_train = model.evaluate(X[train], Y[train], verbose = 0)
scores_test = model.evaluate(X[test], Y[test], verbose = 0)
# Save to cvscores
cvscores[i] = [scores_train[0], scores_train[1] * 100, scores_test[0], scores_test[1] * 100]
subject_id.append(group.iloc[test[0]])
# Clear session
K.clear_session()
# Update counter
i += 1
return pd.DataFrame(cvscores, index = subject_id, columns=['Train_loss', 'Train_acc','Test_loss', 'Test_acc'])
# -
# ### Quick EDA
# Let's begin by loading the provided dataset 'driver_imgs_list' and doing a quick analysis.
driver_imgs_df = pd.read_csv('driver_imgs_list/driver_imgs_list.csv')
driver_imgs_df.head()
# We can note the number of examples by printing the shape of the dataframe. Looks like the training set has 22,424 images.
driver_imgs_df.shape
# We can plot the number of images per class to see if any classes have a low number of images.
class_counts = (driver_imgs_df.classname).value_counts()
PlotClassFrequency(class_counts)
DescribeImageData(class_counts)
# Additionally, we can plot the number of images per test subject. It would be even more helpful to plot the number of images belonging to each class *per subject*, to ensure that the distribution is roughly uniform; we did not do that here, and instead just plotted the number of images per subject.
subject_counts = (driver_imgs_df.subject).value_counts()
plt.figure(figsize=(15,4))
plt.bar(subject_counts.index,subject_counts)
plt.xlabel('subject')
plt.ylabel('count')
plt.title('Number of Images per Subject')
plt.show()
DescribeImageData(subject_counts)
# Furthermore, we can check if there are any null image examples.
pd.isnull(driver_imgs_df).sum()
# ### Preprocess Data
# The data was provided with the classes in order (from class 0 to class 9). Let's shuffle the data by permutating the 'classname' and 'img' attributes.
np.random.seed(0)
myarray = np.random.permutation(driver_imgs_df)
driver_imgs_df = pd.DataFrame(data = myarray, columns=['subject', 'classname', 'img'])
# We'll go ahead and apply a dictionary to the 'classname' attribute and assign the strings to their respective integers.
d = {'c0': 0, 'c1': 1, 'c2': 2, 'c3': 3, 'c4': 4, 'c5': 5, 'c6': 6, 'c7': 7, 'c8': 8, 'c9': 9}
driver_imgs_df.classname = driver_imgs_df.classname.map(d)
# ### Convert Dataframe to Array for Training
# Let's convert the images into numerical arrays of dimension '64, 64, 3'. Both the height and width of the images will be 64 pixels, and each image will have 3 channels (for red, green and blue). The following function saves the array as a .npy file.
CreateImgArray(64, 64, 3, driver_imgs_df, 'train')
# Let's now load the new image arrays into the environment. Note that this step is used to save memory so that CreateImgArray does not have to be executed every time.
X = np.load('X_train_64_64.npy')
X.shape
Y = np.load('Y_train_64_64.npy')
Y.shape
# Let's check our new arrays and ensure we compiled everything correctly. We can see that we do not have any entries in X that contain zero, and Y contains all the target labels.
(X == 0).sum()
PlotClassFrequency(pd.DataFrame(Y)[0].value_counts())
# Furthermore, we can print the images from X and the associated class as a sanity check. Re-scaling the images (between 0 and 1):
X_scaled = Rescale(X)
PrintImage(X_scaled, 2, Y = Y.reshape(-1,1))
# Class of "7" corresponds to a driver "reaching behind", which appears to be the case shown above.
# ### Build the Model
# We'll use the popular Residual Net with 50 layers. Residual networks are essential to preventing vanishing gradients when using a rather 'deep' network (many layers). The identity_block and convolutional_block are defined below.
def identity_block(X, f, filters, stage, block, init):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
init -- keras initializer used for the kernels of the CONV layers
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X,X_shortcut])
X = Activation('relu')(X)
return X
def convolutional_block(X, f, filters, stage, block, init, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
init -- keras initializer used for the kernels of the CONV layers
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
# Second component of main path (≈3 lines)
X = Conv2D(F2, (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3, (1, 1), strides = (1,1), name = conv_name_base + '2c', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3, (1, 1), strides = (s,s), name = conv_name_base + '1', kernel_initializer = init)(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X,X_shortcut])
X = Activation('relu')(X)
return X
# With the two blocks defined, we'll now create the model ResNet50, as shown below.
def ResNet50(input_shape = (64, 64, 3), classes = 10, init = glorot_uniform(seed=0)):
"""
Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = init)(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1, init = init)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b', init = init)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c', init = init)
# Stage 3 (≈4 lines)
X = convolutional_block(X, f = 3, filters = [128,128,512], stage = 3, block='a', s = 2, init = init)
X = identity_block(X, 3, [128,128,512], stage=3, block='b', init = init)
X = identity_block(X, 3, [128,128,512], stage=3, block='c', init = init)
X = identity_block(X, 3, [128,128,512], stage=3, block='d', init = init)
# Stage 4 (≈6 lines)
X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2, init = init)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b', init = init)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c', init = init)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d', init = init)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e', init = init)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f', init = init)
# Stage 5 (≈3 lines)
X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2, init = init)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b', init = init)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c', init = init)
# AVGPOOL
X = AveragePooling2D(pool_size=(2, 2), name = 'avg_pool')(X)
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = init)(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
# ### Cross Validation Training (Leave-One-Group-Out)
# Let's do some basic transformation on the training / label arrays, and print the shapes. After, we'll define some key functions for use in our first CNN model.
# +
# Normalize image vectors
X_train = X/255
# Convert training and test labels to one hot matrices
#Y = convert_to_one_hot(Y.astype(int), 10).T
Y_train = np.expand_dims(Y.astype(int), -1)
print ("number of training examples = " + str(X_train.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
# -
# Next, let's call our LOGO function that incorporates the Leave One Group Out cross-validator. This function lets us split the data using the drivers ('subject') as the group, which should help prevent overfitting, as the model could otherwise learn driver-specific information and become biased.
#
# Below we pass the arguments to the self-defined LOGO function and execute. The return is a dataframe consisting of the accuracy/loss scores of the training/dev sets (for each group/driver).
scores = LOGO(X_train, Y_train, group = driver_imgs_df['subject'],
model_name = ResNet50, input_shape = (64, 64, 3), classes = 10,
init = glorot_uniform(seed=0), optimizer = 'adam', metrics = 'accuracy',
epochs = 2, batch_size = 32)
# Plotting the dev set accuracy, we can see that 'p081' had the lowest accuracy at 8.07%, and 'p002' had the highest accuracy at 71.52%.
plt.figure(figsize=(15,4))
plt.bar(scores.index, scores.loc[:,'Test_acc'].sort_values(ascending=False))
plt.yticks(np.arange(0, 110, 10.0))
plt.show()
# Calling 'describe' method, we can note some useful statistics.
scores.describe()
# And finally, let's print out the train/dev scores.
print("Train acc: {:.2f}. Dev. acc: {:.2f}".format(scores['Train_acc'].mean(), scores['Test_acc'].mean()))
print("Train loss: {:.2f}. Dev. loss: {:.2f}".format(scores['Train_loss'].mean(), scores['Test_loss'].mean()))
# We can note that the train accuracy is higher than the dev accuracy, which is expected. The accuracy is quite low in comparison to our assumed Bayes accuracy of 100% (using human accuracy as a proxy to Bayes), and we have some variance (difference between train and dev) of about 6.72%. Let's try increasing the number of epochs to 5 and observe whether the train/dev accuracies increase (loss decreases).
scores = LOGO(X_train, Y_train, group = driver_imgs_df['subject'],
model_name = ResNet50, input_shape = (64, 64, 3), classes = 10,
init = glorot_uniform(seed=0), optimizer = 'adam', metrics = 'accuracy',
epochs = 5, batch_size = 32)
print("Train acc: {:.2f}. Dev. acc: {:.2f}".format(scores['Train_acc'].mean(), scores['Test_acc'].mean()))
print("Train loss: {:.2f}. Dev. loss: {:.2f}".format(scores['Train_loss'].mean(), scores['Test_loss'].mean()))
# <a class="anchor" id="improve"></a>
# The train and dev accuracy increased to 37.83% and 25.79%, respectively. We can note that we still have an underfitting problem (high bias, about 62.17% from 100%), *however, our variance has increased dramatically between 2 epochs and 5 by about 80% (12.04% variance)!* Not only do **we have high bias, but our model also exhibits high variance**. In order to tackle this, we'll need to address the high bias first (get as close to Bayes error as possible) and then deal with the resulting high variance. Note that ALL of the steps below should be performed with LOGO cross-validation. This way, we can be sure our estimates of the dev set are in line with the holdout set.
#
# In order to tackle **high bias**, we can do any of the following:
# <li>run more epochs</li>
# <li>increase the batch size (up to number of examples)</li>
# <li>make a deeper network</li>
# <li>increases the image size from 64x64 to 128x128, 256x256, etc.</li>
# <li>GridSearching over params (batch size, epoch, optimizer and it's parameters, initializer)</li>
#
# Let's up the epoch count to 10. The assumption is that the train accuracy will be higher than the previous 5 epoch model, but our variance will increase.
scores = LOGO(X_train, Y_train, group = driver_imgs_df['subject'],
model_name = ResNet50, input_shape = (64, 64, 3), classes = 10,
init = glorot_uniform(seed=0), optimizer = 'adam', metrics = 'accuracy',
epochs = 10, batch_size = 32)
print("Train acc: {:.2f}. Dev. acc: {:.2f}".format(scores['Train_acc'].mean(), scores['Test_acc'].mean()))
print("Train loss: {:.2f}. Dev. loss: {:.2f}".format(scores['Train_loss'].mean(), scores['Test_loss'].mean()))
# As expected, the training accuracy increased to 86.95%, but the variance increase from 5 epochs to 10 was about 284% (46.27% variance)! Thus, we can conclude that this model suffers from severe high variance. We can continue on and use the steps above to fix the remaining bias, then we can use the steps below to reduce the variance.
# In order to tackle **high variance**, we can do any of the following:
# <li>Augment images to increase sample size</li>
# <li>Regularization</li>
# <li>GridSearching over params (batch size, epoch, optimizer and it's parameters, initializer)</li>
# <li>Decrease dev set size (allows more examples to be trained, making model less prone to overfitting)</li>
# <li>Investigate classes with low accuracy, and fix them</li>
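# A note on the first item (image augmentation): this dataset is left/right sensitive (e.g. 'texting - right' vs 'texting - left'), so horizontal flips would corrupt the labels, and small translations are a safer choice. Keras's `ImageDataGenerator` (imported above) is the usual tool; as a hypothetical, dependency-light sketch, a random shift can also be written directly in numpy:

```python
import numpy as np

def random_shift(img, max_shift=4, rng=None):
    """Return a copy of img shifted by up to max_shift pixels in x and y."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # np.roll wraps pixels around the border, which is acceptable for small shifts
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

augmented = random_shift(np.ones((64, 64, 3)))
print(augmented.shape)  # (64, 64, 3)
```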
# <table>
# <tr>
# <td>
# **Model**
# </td>
# <td>
# **Epoch**
# </td>
# <td>
# **Train Accuracy**
# </td>
# <td>
# **Dev Accuracy**
# </td>
# <td>
# **Bias**
# </td>
# <td>
# **Variance**
# </td>
# </tr>
# <tr>
# <td>
# **Model A**
# </td>
# <td>
# 2
# </td>
# <td>
# 27.91
# </td>
# <td>
# 21.19
# </td>
# <td>
# 72.09
# </td>
# <td>
# 6.72
# </td>
# </tr>
# <tr>
# <td>
# **Model B**
# </td>
# <td>
# 5
# </td>
# <td>
# 37.83
# </td>
# <td>
# 25.79
# </td>
# <td>
# 62.17
# </td>
# <td>
# 12.04
# </td>
# </tr>
# <tr>
# <td>
# **Model C**
# </td>
# <td>
# 10
# </td>
# <td>
# 86.95
# </td>
# <td>
# 40.68
# </td>
# <td>
# 13.06
# </td>
# <td>
# 46.27
# </td>
# </tr>
# </table>
#
# ### Predictions on the Holdout Set
# We'll go ahead and fit the 10 epoch model. We'd like to confirm that our holdout score is somewhere around the dev score, so that we are not being particularly biased in our dev set, either. Creating, compiling and fitting the model,
model = ResNet50(input_shape = (64, 64, 3), classes = 10)
model.compile(optimizer = 'adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs = 10, batch_size = 32)
save_model(model, 'e10.h5');
model = load_model('e10.h5')
# Let's load the holdout data set from our 'test_file_names' csv file and then create the necessary array.
holdout_imgs_df = pd.read_csv('test_file_names.csv')
holdout_imgs_df.rename(columns={"imagename": "img"}, inplace = True)
CreateImgArray(64, 64, 3, holdout_imgs_df, 'test')
# Again, we'll load the data here instead of having to run CreateImgArray repeatedly.
X_holdout = np.load('X_test_64_64.npy')
X_holdout.shape
# And now calling predictions on the holdout set, as shown below. MAKE SURE to clear the memory before this step!
probabilities = model.predict(X_holdout, batch_size = 32)
# Saving the predictions to a .csv file for submission,
np.savetxt("test_results.csv", probabilities, delimiter=",")
#
# If desired (as a sanity check) we can visually check our predictions by scaling the X_holdout array and then printing the image.
X_holdout_scaled = Rescale(X_holdout)
index = 50000
PrintImage(X_holdout_scaled, index = index, Y = probabilities)
print('y_pred = ' + str(probabilities[index].argmax()))
| Distrated Driver detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import psycopg2
import pandas as pd
import json
from tqdm import tqdm_notebook
with open('config.json') as f:
configDict = json.load(f)
with open('config_queries.json') as f2:
configQueries = json.load(f2)
inserytQuery = configQueries['insertQuery']
user = configDict["user"]
password = configDict['password']
storage = configDict['storageip']
port = configDict['port']
database = configDict['database']
connection = psycopg2.connect(user = user,
password = password,
host = storage,
port = port,
database = database)
# +
cur = connection.cursor()
cur.execute('select * from documents')
r = cur.fetchall()
# -
r
df = pd.read_pickle('../data/arxiv-last50years-data.pickle')
df.head()
df.shape
# +
for i in tqdm_notebook(df.index):
cur.execute(inserytQuery, [str(df.id[i]), str(df.submitter[i]),
str(df.title[i]),
str(df.categories[i]),
str(df.abstract[i]), str(df.update_date[i]), str(df.authors_parsed[i]), int(df.year[i]),
str(df.pdf_link[i]), str(df.pages[i]), 'pipeline'
])
if i%100==0:
connection.commit()
connection.commit()
# -
| util/pushData.ipynb |
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ + [markdown] cell_id="00000-e8616331-6ad7-4760-80f0-31f30bca28bc" deepnote_cell_type="markdown" deepnote_to_be_reexecuted=true source_hash="7993fadf" tags=[]
/ # UMichigan Applied Plotting, Week 4 Assignment
/ + cell_id="00001-7176a3b2-dd42-48ed-b3fb-8e95392a2461" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=1345 execution_start=1617056859392 source_hash="13cacd7e" tags=[]
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sbn
import pathlib as path
import requests as r
/ %matplotlib widget
/ + cell_id="00003-a168d195-ac36-48b4-b75c-67587ac14c2b" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=544 execution_start=1617057353910 source_hash="b5681ff4" tags=[]
urls = []
# Check if data files exist
datafiles = ["aa_population.csv", "aa_income.csv", "aa_prices.csv"]
datapaths = [path.Path(onefile) for onefile in datafiles]
available = [item.exists() for item in datapaths]
files = dict(zip(datapaths, available))
print(files)
for dfile in datapaths:
    if not dfile.exists():
        print("Dangit:", dfile)
/ + cell_id="00003-13212898-3c0c-4110-b9b5-c04fb89c35f6" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=13 execution_start=1617057045115 source_hash="59b36359" tags=[]
print(available)
/ + cell_id="00004-d21cf354-d4eb-4293-9324-e49b34d12f53" deepnote_cell_type="code" tags=[]
| notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import gdown
from pathlib import Path
import zipfile
from deepface.commons import functions
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Flatten, Dense, Dropout
import matplotlib.pyplot as plt
from keras.preprocessing import image
import numpy as np
import cv2
# -
from facecrop_image import facecrop
directory = 'C:/Users/NoLifer/Facial-Expression-Recognition/images/'
res_dir = 'C:/Users/NoLifer/Facial-Expression-Recognition/cropped/'
def emotion_analysis(emotions):
objects = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
y_pos = np.arange(len(objects))
plt.bar(y_pos, emotions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.ylabel('percentage')
plt.title('emotion')
plt.show()
# +
model = tf.keras.models.load_model('model_deep_face.h5')
model.load_weights('deep_face_weights.h5')
# model = tf.keras.models.load_model('D:ferNet.h5')
# model.load_weights('D:fernet_bestweight.h5')
# +
# path = "D:EmotionRecognition/CNN/test/surprise/PublicTest_3206770.jpg"
# path = "D:EmotionRecognition/images/testv.png"
i=1
# Take the directory of the images
cr_images = os.listdir(directory)
for cr_img in cr_images:
    # Take the image
    path = directory + cr_img
    # Crop only the face
    # Show it in the subplot
    # Save it in the result path
    cropped = facecrop(path, cr_img)
    plt.show()
    res_path = res_dir + 'Cropped ' + cr_img
    true_img = image.load_img(res_path)
    img = image.load_img(res_path, target_size=(48, 48), color_mode="grayscale")
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x /= 255
    custom = model.predict(x)
    emotion_analysis(custom[0])
    x = np.array(x, 'float32')
    x = x.reshape([48, 48])
    i += 1
# plt.show()
# -
| load_models-PlotAll.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
from your_lib.core import *
# # scikit-density
#
# > skdensity is a package for non-parametric conditional density estimation using common machine learning algorithms.
#
# ## Conditional probability density estimation (CDE)
# skdensity has 3 main estimator types:
#
# - `EntropyEstimator`
# - Quantizes the output space and performs a classification task on the binned (downsampled) target. During inference (sampling), it calls the `predict_proba` method and draws samples from bins (upsampling) according to their probabilities. Any estimator (including ensembles) with a `predict_proba` method can be used as the base estimator.
#
# - `KernelTreeEstimator`
# - Trains a tree regressor (or ensemble) on the training data and saves a mapping of the data in each terminal node. During inference, it calls the `apply` method of the base estimator and draws samples from the terminal nodes the inferred data ended up in.
# - `KernelTreeEntropyEstimator`
# - Same as `KernelTreeEstimator`, but fits a binned tree classifier (as in `EntropyEstimator`) instead.
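A minimal sketch of the quantize-then-classify idea behind `EntropyEstimator`, in plain NumPy. A toy conditional histogram stands in for the base estimator, and all helper names here are illustrative, not the package API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x with a little noise.
x = rng.uniform(0, 1, 500)
y = 2 * x + rng.normal(0, 0.1, 500)

# 1. Quantize (downsample) the continuous target into bins.
n_bins = 10
edges = np.linspace(y.min(), y.max(), n_bins + 1)
labels = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)

# 2. Any classifier with predict_proba could serve as the base estimator;
#    a histogram conditional on binned x stands in for one here.
x_edges = np.linspace(0, 1, 6)
x_bins = np.clip(np.digitize(x, x_edges) - 1, 0, 4)
proba = np.zeros((5, n_bins))
for xb in range(5):
    counts = np.bincount(labels[x_bins == xb], minlength=n_bins)
    proba[xb] = counts / counts.sum()

# 3. Sampling (upsample): draw a bin by its probability, then a point inside it.
def sample(x_new, size=1000):
    xb = int(np.clip(np.digitize(x_new, x_edges) - 1, 0, 4))
    bins = rng.choice(n_bins, size=size, p=proba[xb])
    return rng.uniform(edges[bins], edges[bins + 1])

draws = sample(0.9)
print(draws.mean())  # close to the conditional mean 2 * 0.9 = 1.8
```

Swapping the histogram for any scikit-learn classifier with `predict_proba` recovers the general scheme the bullet describes.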
#
# ## Kernel Density Estimation
# ## CDE validation metrics
# ## Install
# `pip install your_project_name`
# ## How to use
# Fill me in please! Don't forget code examples:
1+1
| notebooks/index.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ! export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
# %env CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
# +
import os
from enduserLib.xr_netcdf import xr_build_cube_concat_ds
# -
def _mkdir(directory):
    # Create the directory if it does not already exist.
    try:
        os.stat(directory)
    except FileNotFoundError:
        os.mkdir(directory)
_mkdir('./junkbox')
working_bucket = 'dev-et-data'
main_prefix = 'enduser/DelawareRiverBasin/Run09_13_2020/ward_sandford_customer/'
year = 2003
def create_s3_list_of_months(main_prefix, year, output_name='etasw_'):
the_list = []
for i in range(1,13):
month = f'{i:02d}'
file_object = main_prefix + str(year) + '/' + output_name + str(year) + month + '.tif'
the_list.append(file_object)
return the_list
# +
working_bucket = 'dev-et-data'
main_prefix = 'enduser/DelawareRiverBasin/Run09_13_2020/ward_sandford_customer/'
years = range(2003,2016)
#output_name = 'etasw_'
#output_name = 'dd_'
output_name = 'srf_'
for year in years:
my_tifs = create_s3_list_of_months(main_prefix, year, output_name)
ds = xr_build_cube_concat_ds(my_tifs, output_name)
nc_file_name = './junkbox' +'/' + output_name + str(year) + '.nc'
ds.to_netcdf(nc_file_name, engine='h5netcdf')
# +
# #! rm -fr junkbox
# -
# !ls -lh ./junkbox
# #! tar cvfz ./junkbox/wzell_etasw.tgz ./junkbox/eta*
# #! tar cvfz ./junkbox/wzell_dd.tgz ./junkbox/dd*
# ! tar cvfz ./junkbox/wzell_srf.tgz ./junkbox/srf*
# !ls -lh ./junkbox/*.tgz
| wzell-sums/00-build-xarrays-and-netcdfs-via-enduserLib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from torchvision.datasets import cifar
cifar.CIFAR10('data', download=True)
import pickle as pk
import numpy as np
data = {'data': [], 'labels': []}
# Load all five CIFAR-10 training batches.
for batch_num in range(1, 6):
    with open(f'data/cifar-10-batches-py/data_batch_{batch_num}', 'rb') as file:
        tmp = pk.load(file, encoding='latin1')
        data['data'].extend(tmp['data'])
        data['labels'].extend(tmp['labels'])
len(data['data'])
# Group images by label.
c = {}
for i in range(len(data['data'])):
    c.setdefault(data['labels'][i], []).append(data['data'][i])
import random
for key, value in c.items():
random.shuffle(value)
labeled = {'data': [], 'labels': []}
unlabeled = {'data': [], 'labels': []}
for key, value in c.items():
for i in range(len(value)):
if i < 500:
labeled['data'].append((value[i].reshape(3, 32, 32).transpose(1, 2, 0)))
labeled['labels'].append(key)
else:
unlabeled['data'].append((value[i].reshape(3, 32, 32).transpose(1, 2, 0)))
unlabeled['labels'].append(key)
k = labeled['data'][8]
import matplotlib.pyplot as plt
plt.imshow(k)
import pickle as pk
with open('data/cifar10/labeled.pk', 'wb') as file:
pk.dump(labeled, file)
with open('data/cifar10/unlabeled.pk', 'wb') as file:
pk.dump(unlabeled, file)
test_data = {'data': [], 'labels': []}
with open('data/cifar-10-batches-py/test_batch', 'rb') as file:
tmp = pk.load(file, encoding='latin1')
test_data['data'].extend(tmp['data'])
test_data['labels'].extend(tmp['labels'])
for i in range(len(test_data['data'])):
image = test_data['data'][i].reshape(3, 32, 32).transpose(1, 2, 0)
test_data['data'][i] = image
with open('data/cifar10/test.pk', 'wb') as file:
pk.dump(test_data, file)
plt.imshow(test_data['data'][4])
| code/data_split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="HtYmkuX775T4" colab_type="text"
# ## Outline
#
# 1. Recap of what we did last time
# 2. Why do we use batching
# 3. Batching for sequence models
# 4. Padding and packing in PyTorch
# 5. Training with batched dataset
# 6. Comparison of performance with batching and on GPU
# + [markdown] id="tTc4vO5gzXR9" colab_type="text"
# ### Setting up the dependencies
# + id="2uNUaCER7WHb" colab_type="code" colab={}
from io import open
import os, string, random, time, math
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
# + id="Idg_OtiyKSXf" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
# + id="b4GU3MziGYar" colab_type="code" colab={}
import torch
import torch.nn as nn
import torch.optim as optim
# Instantiates the device to be used as GPU/CPU based on availability
device_gpu = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# + id="fAJxDcJIWrq0" colab_type="code" colab={}
from IPython.display import clear_output
# + [markdown] id="7h5jQjsF7-B7" colab_type="text"
# ## Dataset
# + [markdown] id="R2kUDY_4zmy-" colab_type="text"
# ### Pre-processing
# + id="rKjuUO8Q7-tf" colab_type="code" colab={}
languages = []
data = []
X = []
y = []
with open('name2lang.txt', 'r') as f:
for line in f:
line = line.split(',')
name = line[0].strip()
lang = line[1].strip()
if not lang in languages:
languages.append(lang)
X.append(name)
y.append(lang)
data.append((name, lang))
n_languages = len(languages)
# + id="fYQpOJaK83b9" colab_type="code" outputId="2b7bddbf-c1e0-4aa7-f6e7-3129953ab0a9" colab={"base_uri": "https://localhost:8080/", "height": 56}
print(languages)
# + id="sLFeNZiM9H1l" colab_type="code" outputId="35da0e17-299e-415b-b0cd-b01bd6022005" colab={"base_uri": "https://localhost:8080/", "height": 56}
print(data[0:10])
# + [markdown] id="LdDFQsFMKWBG" colab_type="text"
# ### Test-train split
# + id="7rCPOxMNKYVv" colab_type="code" colab={}
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
# + id="6tiJj9e9LirJ" colab_type="code" outputId="e0e061aa-644e-4f32-f65d-610146e299eb" colab={"base_uri": "https://localhost:8080/", "height": 56}
print(X_train)
# + id="T4osxIA-PNQN" colab_type="code" outputId="1c1b53f4-be0b-44db-ee56-5f3eb618a4e2" colab={"base_uri": "https://localhost:8080/", "height": 36}
print(len(X_train), len(X_test))
# + [markdown] id="daXjlTRLB_lp" colab_type="text"
# ### Encoding names and language
# + id="5SRc2c_l9MJO" colab_type="code" colab={}
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# + id="_DjSLYo8-Q3X" colab_type="code" colab={}
def name_rep(name):
rep = torch.zeros(len(name), 1, n_letters)
for index, letter in enumerate(name):
pos = all_letters.find(letter)
rep[index][0][pos] = 1
return rep
# + id="kH5o5z0O_LoV" colab_type="code" colab={}
def lang_rep(lang):
return torch.tensor([languages.index(lang)], dtype=torch.long)
# + id="K1uwnfKp_BTK" colab_type="code" outputId="da6fbed2-9751-45b7-e554-3be14396babf" colab={"base_uri": "https://localhost:8080/", "height": 477}
name_rep('Abreu')
# + id="FDL869I7_C4n" colab_type="code" outputId="9f048f3e-6c87-45b8-e7eb-df1447cfe7d4" colab={"base_uri": "https://localhost:8080/", "height": 36}
lang_rep('Portuguese')
# + [markdown] id="vH3KqjOOD6tY" colab_type="text"
# ### Basic visualisation
# + id="FGbo7wNNAVv4" colab_type="code" colab={}
count = {}
for l in languages:
count[l] = 0
for d in data:
count[d[1]] += 1
# + id="eWnmFwl6ENNH" colab_type="code" outputId="db32a4fc-dcda-4587-9713-a377106ced74" colab={"base_uri": "https://localhost:8080/", "height": 56}
print(count)
# + id="Dc9qvxU4EORT" colab_type="code" outputId="10ab3172-38df-4ca7-cecf-715e76e3c463" colab={"base_uri": "https://localhost:8080/", "height": 318}
plt_ = sns.barplot(x=list(count.keys()), y=list(count.values()))
plt_.set_xticklabels(plt_.get_xticklabels(), rotation=90)
plt.show()
# + [markdown] id="ne8qqdt0F59E" colab_type="text"
# ## Basic network and testing inference
# + id="cAs526MUFc_C" colab_type="code" colab={}
class RNN_net(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN_net, self).__init__()
self.hidden_size = hidden_size
self.rnn_cell = nn.RNN(input_size, hidden_size)
self.h2o = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input_, hidden = None, batch_size = 1):
out, hidden = self.rnn_cell(input_, hidden)
output = self.h2o(hidden.view(-1, self.hidden_size))
output = self.softmax(output)
return output, hidden
def init_hidden(self, batch_size = 1):
return torch.zeros(1, batch_size, self.hidden_size)
# + id="pU0iZYnHGUhM" colab_type="code" colab={}
n_hidden = 128
net = RNN_net(n_letters, n_hidden, n_languages)
# + id="BoO6JFooHCJ4" colab_type="code" colab={}
def infer(net, name, device = 'cpu'):
    # Run one name through the network and return its log-probabilities.
    name_ohe = name_rep(name).to(device)
    output, hidden = net(name_ohe)
    return output
# + id="W3ZjoIJcImaN" colab_type="code" outputId="b52659ae-ca61-4811-a43c-790dd51265d5" colab={"base_uri": "https://localhost:8080/", "height": 75}
infer(net, 'Adam')
# + [markdown] id="6WscCKVAIEMQ" colab_type="text"
# ## Evaluate model
# + id="uJnxXFA0V701" colab_type="code" colab={}
def dataloader(npoints, X_, y_):
to_ret = []
for i in range(npoints):
index_ = np.random.randint(len(X_))
name, lang = X_[index_], y_[index_]
to_ret.append((name, lang, name_rep(name), lang_rep(lang)))
return to_ret
# + id="AWztceyoV84F" colab_type="code" outputId="6e5303b8-efd4-4f50-e20f-d4a97bb4d3e0" colab={"base_uri": "https://localhost:8080/", "height": 1000}
dataloader(2, X_train, y_train)
# + id="b4d_RfqmP1c4" colab_type="code" colab={}
def eval(net, n_points, topk, X_, y_, device = 'cpu'):
net = net.eval().to(device)
data_ = dataloader(n_points, X_, y_)
correct = 0
for name, language, name_ohe, lang_rep in data_:
output = infer(net, name, device)
val, indices = output.topk(topk)
indices = indices.to('cpu')
if lang_rep in indices:
correct += 1
accuracy = correct/n_points
return accuracy
# + id="E8FAv356Ifqk" colab_type="code" outputId="41f186b7-1983-4574-8c5c-90d55f5aaa53" colab={"base_uri": "https://localhost:8080/", "height": 36}
eval(net, 1000, 1, X_test, y_test)
# + [markdown] id="H2DryJZtg6EB" colab_type="text"
# # Batching
# + id="wPhkg4KlWCWj" colab_type="code" colab={}
def batched_name_rep(names, max_word_size):
rep = torch.zeros(max_word_size, len(names), n_letters)
for name_index, name in enumerate(names):
for letter_index, letter in enumerate(name):
pos = all_letters.find(letter)
rep[letter_index][name_index][pos] = 1
return rep
# + id="mIIIpkbuaNhI" colab_type="code" colab={}
def print_char(name_reps):
name_reps = name_reps.view((-1, name_reps.size()[-1]))
for t in name_reps:
if torch.sum(t) == 0:
print('<pad>')
else:
index = t.argmax()
print(all_letters[index])
# + id="g3VSyh5BWERT" colab_type="code" outputId="61719490-a961-4ea7-e059-c1e2592ecb73" colab={"base_uri": "https://localhost:8080/", "height": 1000}
out_ = batched_name_rep(['Shyam', 'Ram'], 5)
print(out_)
print(out_.shape)
print_char(out_)
# + id="ylDe6hKuWkE-" colab_type="code" colab={}
def batched_lang_rep(langs):
rep = torch.zeros([len(langs)], dtype=torch.long)
for index, lang in enumerate(langs):
rep[index] = languages.index(lang)
return rep
# + id="HjgEOjpvNbwz" colab_type="code" colab={}
def batched_dataloader(npoints, X_, y_, verbose=False, device = 'cpu'):
names = []
langs = []
X_lengths = []
for i in range(npoints):
index_ = np.random.randint(len(X_))
name, lang = X_[index_], y_[index_]
X_lengths.append(len(name))
names.append(name)
langs.append(lang)
max_length = max(X_lengths)
names_rep = batched_name_rep(names, max_length).to(device)
langs_rep = batched_lang_rep(langs).to(device)
padded_names_rep = torch.nn.utils.rnn.pack_padded_sequence(names_rep, X_lengths, enforce_sorted = False)
if verbose:
print(names_rep.shape, padded_names_rep.data.shape)
print('--')
if verbose:
print(names)
print_char(names_rep)
print('--')
if verbose:
print_char(padded_names_rep.data)
print('Lang Rep', langs_rep.data)
print('Batch sizes', padded_names_rep.batch_sizes)
return padded_names_rep.to(device), langs_rep
# + id="vkWwJr4GXjSY" colab_type="code" outputId="5cff042e-6f0c-4524-d0ef-2ea3b480ee34" colab={"base_uri": "https://localhost:8080/", "height": 1000}
p, l = batched_dataloader(3, X_train, y_train, True)
# + [markdown] id="KyvukubrKZf5" colab_type="text"
# ## Training
# + [markdown] id="nO4UWonqaxQm" colab_type="text"
# ### Basic setup
# + id="FMW_J-m_IpKX" colab_type="code" colab={}
def train(net, opt, criterion, n_points):
opt.zero_grad()
total_loss = 0
data_ = dataloader(n_points, X_train, y_train)
total_loss = 0
for name, language, name_ohe, lang_rep in data_:
hidden = net.init_hidden()
for i in range(name_ohe.size()[0]):
output, hidden = net(name_ohe[i:i+1], hidden)
loss = criterion(output, lang_rep)
loss.backward(retain_graph=True)
total_loss += loss
opt.step()
return total_loss/n_points
# + id="hZyvRBSCg_od" colab_type="code" colab={}
def train_batch(net, opt, criterion, n_points, device = 'cpu'):
net.train().to(device)
opt.zero_grad()
batch_input, batch_groundtruth = batched_dataloader(n_points, X_train, y_train, False, device)
output, hidden = net(batch_input)
loss = criterion(output, batch_groundtruth)
loss.backward()
opt.step()
return loss
# + id="TOiTJXgTNmc8" colab_type="code" colab={}
net = RNN_net(n_letters, n_hidden, n_languages)
criterion = nn.NLLLoss()
opt = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
# + id="k3OzbxjJNpny" colab_type="code" outputId="4e3f7852-e3e3-45ed-cd5e-3118aeb40d68" colab={"base_uri": "https://localhost:8080/", "height": 75}
# %%time
train(net, opt, criterion, 256)
# + id="hPt7X0mAi3jz" colab_type="code" colab={}
net = RNN_net(n_letters, n_hidden, n_languages)
criterion = nn.NLLLoss()
opt = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
# + id="gVw0yRLujD9V" colab_type="code" outputId="874a1043-b310-4296-e48b-87307229b039" colab={"base_uri": "https://localhost:8080/", "height": 75}
# %%time
train_batch(net, opt, criterion, 256)
# + [markdown] id="0vVkxII7P1nS" colab_type="text"
# ### Full training setup
# + id="6ga8bJctPJgG" colab_type="code" colab={}
def train_setup(net, lr = 0.01, n_batches = 100, batch_size = 10, momentum = 0.9, display_freq=5, device = 'cpu'):
net = net.to(device)
criterion = nn.NLLLoss()
opt = optim.SGD(net.parameters(), lr=lr, momentum=momentum)
loss_arr = np.zeros(n_batches + 1)
for i in range(n_batches):
loss_arr[i+1] = (loss_arr[i]*i + train_batch(net, opt, criterion, batch_size, device))/(i + 1)
if i%display_freq == display_freq-1:
clear_output(wait=True)
print('Iteration', i, 'Loss', loss_arr[i])
# print('Top-1:', eval(net, len(X_test), 1, X_test, y_test), 'Top-2:', eval(net, len(X_test), 2, X_test, y_test))
plt.figure()
plt.plot(loss_arr[1:i], '-*')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.show()
print('\n\n')
print('Top-1:', eval(net, len(X_test), 1, X_test, y_test, device), 'Top-2:', eval(net, len(X_test), 2, X_test, y_test, device))
# + [markdown] id="7T_28hAea6AB" colab_type="text"
# ## RNN Cell
# + id="xmyRZ5jcG3HO" colab_type="code" outputId="49616dd0-d3c3-464d-9ef2-9a6e68a3267f" colab={"base_uri": "https://localhost:8080/", "height": 417}
# %%time
net = RNN_net(n_letters, 128, n_languages)
train_setup(net, lr=0.15, n_batches=5000, batch_size = 512, display_freq=500) # CPU Training example
# + id="0c5EczLQG9jG" colab_type="code" outputId="45daa3d3-0cb9-4afb-a819-d8318143f8fc" colab={"base_uri": "https://localhost:8080/", "height": 417}
# %%time
net = RNN_net(n_letters, 128, n_languages)
train_setup(net, lr=0.15, n_batches=5000, batch_size = 512, display_freq=100, device = device_gpu) # GPU Training Example
# + [markdown] id="Li5UaF_OY7x9" colab_type="text"
# ## LSTM cell
# + id="A6KSijnhWgtH" colab_type="code" colab={}
class LSTM_net(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(LSTM_net, self).__init__()
self.hidden_size = hidden_size
self.lstm_cell = nn.LSTM(input_size, hidden_size)
self.h2o = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden = None):
out, hidden = self.lstm_cell(input, hidden)
output = self.h2o(hidden[0].view(-1, self.hidden_size))
output = self.softmax(output)
return output, hidden
def init_hidden(self, batch_size = 1):
return (torch.zeros(1, batch_size, self.hidden_size), torch.zeros(1, batch_size, self.hidden_size))
# + id="A3qjhBQfZK4i" colab_type="code" outputId="13dbc210-8587-40b9-d4af-27c276914902" colab={"base_uri": "https://localhost:8080/", "height": 379}
n_hidden = 128
net = LSTM_net(n_letters, n_hidden, n_languages)
train_setup(net, lr=0.15, n_batches=8000, batch_size = 512, display_freq=1000, device = device_gpu)
# + [markdown] id="k0JnIpUmaC_R" colab_type="text"
# ## GRU Cell
# + id="CL6ybA2pZQBZ" colab_type="code" colab={}
class GRU_net(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(GRU_net, self).__init__()
self.hidden_size = hidden_size
self.gru_cell = nn.GRU(input_size, hidden_size)
self.h2o = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, input, hidden = None):
out, hidden = self.gru_cell(input, hidden)
output = self.h2o(hidden.view(-1, self.hidden_size))
output = self.softmax(output)
return output, hidden
# + id="i_2SCAtvaX2Q" colab_type="code" outputId="737e2a85-b748-4e26-96ca-18dfbd5dbdf9" colab={"base_uri": "https://localhost:8080/", "height": 379}
n_hidden = 128
net = GRU_net(n_letters, n_hidden, n_languages)
train_setup(net, lr=0.15, n_batches=8000, batch_size = 512, display_freq=1000, device = device_gpu)
| 0715_BatchSeqModels-1563203033789.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.1 64-bit
# language: python
# name: python38164bit7c8e2c5783f94c439cf457bea5c85396
# ---
# + [markdown] id="xngCCLB4za5a" colab_type="text"
# # Basic Neural Network to predict a Y value from a given X value
# + id="IK_dlkmmtlvJ" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1599283293847, "user_tz": -330, "elapsed": 2558, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh7bvpDiJbnOqHd5khnVhtfGS1KhD0ET2JCeKvynRQ=s64", "userId": "12167508502914545984"}}
import tensorflow as tf
import numpy as np
from tensorflow import keras
# + id="xdFU5rSjr4QW" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1599283679473, "user_tz": -330, "elapsed": 1317, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh7bvpDiJbnOqHd5khnVhtfGS1KhD0ET2JCeKvynRQ=s64", "userId": "12167508502914545984"}}
model = keras.Sequential([keras.layers.Dense(units = 1, input_shape = [1])])
# + [markdown] id="B30sEj9azoO0" colab_type="text"
# It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.
# + id="5i46lnmftlFF" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1599283754585, "user_tz": -330, "elapsed": 1268, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh7bvpDiJbnOqHd5khnVhtfGS1KhD0ET2JCeKvynRQ=s64", "userId": "12167508502914545984"}}
model.compile(optimizer = 'sgd', loss = 'mean_squared_error')
# + [markdown] id="-SRdx_xEztgc" colab_type="text"
# It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it tries to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower).
# It repeats this for the number of EPOCHS, which you will see shortly. The `compile` call above is how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer.
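The guess/loss/update loop described above can be sketched in plain NumPy (a hand-rolled gradient-descent toy to show the mechanics, not what Keras does internally):

```python
import numpy as np

# The same toy data the notebook trains on: y = 2x - 1.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w, b, lr = 0.0, 0.0, 0.05   # initial "guess": y = 0x + 0
for epoch in range(500):
    pred = w * xs + b                       # forward pass (the guess)
    loss = np.mean((pred - ys) ** 2)        # mean squared error
    grad_w = np.mean(2 * (pred - ys) * xs)  # dLoss/dw
    grad_b = np.mean(2 * (pred - ys))       # dLoss/db
    w -= lr * grad_w                        # gradient-descent update
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # → 2.0 -1.0
```

Each epoch of `model.fit` performs the same kind of step, with Keras computing the gradients for us.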
# + id="TSaeKMumvIm4" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1599283754586, "user_tz": -330, "elapsed": 900, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh7bvpDiJbnOqHd5khnVhtfGS1KhD0ET2JCeKvynRQ=s64", "userId": "12167508502914545984"}}
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
# + [markdown] id="KURetfJ21IyX" colab_type="text"
# #Training
# + id="vg_k0DSpvLuZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1599283761138, "user_tz": -330, "elapsed": 6543, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gh7bvpDiJbnOqHd5khnVhtfGS1KhD0ET2JCeKvynRQ=s64", "userId": "12167508502914545984"}} outputId="bfc6deeb-2429-4670-a7bb-5c10284a6e7b"
model.fit(xs, ys, epochs=500)
# + id="Pl9r-uTxvSGH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1599283782545, "user_tz": -330, "elapsed": 1151, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "12167508502914545984"}} outputId="85a86f28-a5db-4eb1-a53c-0f4526d33c86"
model.predict([10.0])
| Basic/Basic Neural Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/11bender/HDR/blob/main/HDR_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="VqghhsoSqoKw"
# ## **Handwritten digit recognition (HDR)**
# Backpropagation neural network with Pytorch 💥
#
# ---
#
#
# This project's aim is to realize a system that recognizes handwritten digits and converts them into machine-readable form.
#
# The system is developed using a backpropagation neural network trained by the MNIST dataset.
#
# An *effective* feature extraction method called [LLF (Local Line Fitting)](https://drive.google.com/file/d/16oAKR5FrQNRoWzxbLgYlUsO3vq2ZX5Im/view?usp=sharing) is used. The method, based on simple geometric operations, is very efficient and yields a relatively low-dimensional, distortion-invariant representation.
#
# An important feature of the approach is that **no preprocessing** of the input image is required. A black & white or gray-scale pixel representation is directly used **without thinning, contour following, binarization**, etc. Therefore, high recognition **speed** can be achieved.
#
#
# + id="-laTAPxtzv_k"
import torch
from torch import nn
from torchvision import datasets, transforms
from torch.utils.data import random_split, DataLoader, Dataset
import torch.optim as optim
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
from PIL import ImageOps
from PIL import Image
from PIL import ImageEnhance
# + id="U8JEDZkNzx9Z"
# Download train_data
train_data = datasets.MNIST('data', train=True, download=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="JbgctiKPbf6m" outputId="1d72e3fc-ecde-4925-81d4-1705e0df6e05"
plt.imshow(train_data[20][0])
plt.title(train_data[20][1])
plt.show()
# + [markdown] id="pExXk9EGEAU5"
# ---
# # Feature extraction
#
#
#
#
# + [markdown] id="loUtWFrGG3ch"
# We split the image into 16 cells.
# + id="R43qkja8ERen"
feature_np = np.array(train_data[20][0])
figure = feature_np.reshape(28,28)
# Split figure into 16 cells
cells = [] #16
x = np.vsplit(figure, 4)
for i in range(len(x)):
cells.extend(np.hsplit(x[i],4))
cells_np = np.array(cells)
# + [markdown] id="th3qJD9cHAg7"
# Let's see results!
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="Pcdh-zOjE0bj" outputId="bc06a67d-8c3f-43e6-e74e-8bd3ce027111"
f, axarr = plt.subplots(4, 4)
for i in range(16):
    axarr[i // 4, i % 4].imshow(cells_np[i])
# + [markdown] id="GJrUJ-HNHEo_"
# Now we can extract our 3 features from each cell!
# + id="R7cPvY_ZE8aO"
# Extract 3 features from each cell
feature_ex = [] #3*16
for i in range(16):
f1 = np.count_nonzero(cells_np[i] > 30)/49 #feature 1
indices_nonzero = np.nonzero(cells_np[i] > 30)
(y,x) = indices_nonzero
if x.shape==(0,):
f2=0.
f3=1.
else:
x = x.reshape(-1, 1)
y = y.reshape(-1, 1)
reg = LinearRegression().fit(x, y)
f2 = (2*float(reg.coef_))/(1+float(reg.coef_)**2) #feature 2
f3 = (1-float(reg.coef_)**2)/(1+float(reg.coef_)**2) #feature 3
feature_ex.append(f1)
feature_ex.append(f2)
feature_ex.append(f3)
feature_ex_np = np.array(feature_ex).astype('float32')
feature_ex = torch.as_tensor(feature_ex_np) #Don't allocate new memory
# + [markdown] id="xuih7aS3HPdG"
# The result is a tensor of 16*3 elements.
# + colab={"base_uri": "https://localhost:8080/"} id="7Kim1rqPFJf3" outputId="fb3151c5-7340-44a2-ff05-ceebf5079fd4"
feature_ex
# + [markdown] id="FuVd4okFHaWQ"
# Let's wrap this in a function!
# + id="vV0yS3isz7ub"
def feature_extraction(figure):
# Split figure into 16 cells
cells = [] #16
x = np.vsplit(figure, 4)
for i in range(len(x)):
cells.extend(np.hsplit(x[i],4))
cells_np = np.array(cells)
# Extract 3 features from each cell
feature_ex = [] #3*16
for i in range(16):
f1 = np.count_nonzero(cells_np[i] > 30)/49 #feature 1
indices_nonzero = np.nonzero(cells_np[i] > 30)
(y,x) = indices_nonzero
if x.shape==(0,):
f2=0.
f3=1.
else:
x = x.reshape(-1, 1)
y = y.reshape(-1, 1)
reg = LinearRegression().fit(x, y)
f2 = (2*float(reg.coef_))/(1+float(reg.coef_)**2) #feature 2
f3 = (1-float(reg.coef_)**2)/(1+float(reg.coef_)**2) #feature 3
feature_ex.append(f1)
feature_ex.append(f2)
feature_ex.append(f3)
feature_ex_np = np.array(feature_ex).astype('float32')
feature_ex = torch.as_tensor(feature_ex_np) #Don't allocate new memory
return feature_ex
# + [markdown] id="D6DusB4ZHkGI"
# We apply the feature extraction method to our dataset.
# + id="N70Qrmbn0ELu"
# Create new train data
new_train_data = []
for i in range (len(train_data)):
feature_np = np.array(train_data[i][0])
figure = feature_np.reshape(28,28)
feature_ex = feature_extraction(figure)
new_element = (feature_ex, train_data[i][1])
new_train_data.append(new_element)
# + colab={"base_uri": "https://localhost:8080/"} id="fYvrVE02f18V" outputId="6e61933b-fe26-457a-fb8e-f74ca94df7b8"
len(new_train_data)
# + [markdown] id="1ZMrgWi9XMeR"
# We split the data and feed our DataLoaders.
# + id="v2siognL1h4-"
train, val = random_split(new_train_data, [50000, 10000])
train_loader = DataLoader(train, batch_size=24)
val_loader = DataLoader(val, batch_size=24)
# + [markdown] id="VD_pvsyAXSRt"
#
#
# ---
#
#
# # Training the model
# + [markdown] id="DeL6H3rcXZ5Z"
# Let's create the model first!
# + id="yxhg4QJP2GY6"
# Define the model
class ResNet(nn.Module):
def __init__(self):
super().__init__()
self.l1 = nn.Linear(16*3, 50)
self.l2 = nn.Linear(50, 50)
self.do = nn.Dropout(0.1)
self.l3 = nn.Linear(50, 10)
def forward(self, x):
h1 = nn.functional.relu(self.l1(x))
h2 = nn.functional.relu(self.l2(h1))
do = self.do(h2 + h1)
output = self.l3(do)
return output
model = ResNet()
# + colab={"base_uri": "https://localhost:8080/"} id="SBtr1H5s6svm" outputId="58aa65ba-5620-44aa-cb3b-10a7cfb37dc7"
model
# + [markdown] id="GmLFdFAbXiMS"
# We define loss and optimizer functions
# + id="KmWRUDCc2nL1"
loss = nn.CrossEntropyLoss()
# + id="BIBK0r702r98"
optim = optim.Adam(model.parameters(), lr=0.001)
# + [markdown] id="NZaVZgjaX4Kt"
# It's time to train the model
# + colab={"base_uri": "https://localhost:8080/"} id="quszIHHO2uHj" outputId="76c05452-0f85-493d-8cdb-d8cd0a9293fc"
nb_epochs = 12
for epoch in range(nb_epochs):
# Training loop
train_loss = 0
model.train()
for batch in train_loader:
feature, label = batch
#1 forward
output = model(feature)
#2 compute
L = loss(output, label)
#3 clean
model.zero_grad()
#4 backward
L.backward()
#5 Apply
optim.step()
train_loss += L.item() #a tensor
train_loss /= len(train_loader)
# Validation loop
valid_loss = 0
correct = 0
total = 0
model.eval()
for batch in val_loader:
feature, label = batch
#1 forward
with torch.no_grad():
output = model(feature)
#2 compute
L = loss(output, label)
valid_loss += L.item()
correct += torch.sum(torch.argmax(output, dim=1) == label).item()
valid_loss /= len(val_loader)
correct /= len(val_loader.dataset)
print(f"epoch: {epoch+1}, train loss: {train_loss:.4f}, validation loss: {valid_loss:.4f}, correct predictions: {correct*100:.2f}%")
# + [markdown] id="c4vl-WMrYYqr"
# # Testing the model
# + [markdown] id="60ErqRrDaL22"
# ### Testing the model using MNIST test dataset
# + id="xZ-RU0Yk200z"
# Download test_data
test_data = datasets.MNIST('data', train=False, download=True)
# + id="WlmbMAZW7-21"
# Create new test_data
new_test_data = []
for i in range (10000):
feature_np = np.array(test_data[i][0])
figure = feature_np.reshape(28,28)
feature_ex = feature_extraction(figure)
new_element = (feature_ex, test_data[i][1])
new_test_data.append(new_element)
# + id="bx-5mjBg7Vvu"
# Define test loader
test_loader = DataLoader(new_test_data, batch_size=32, shuffle=True)
# + colab={"base_uri": "https://localhost:8080/"} id="6-oPfDgy8LZI" outputId="7667a909-5abd-4f4f-ba1b-6ac1d97ccd09"
# Test loop
test_loss = 0
correct = 0
total = 0
model.eval()
for batch in test_loader:
feature, label = batch
#1 forward
with torch.no_grad():
output = model(feature)
#2 compute
L = loss(output, label)
test_loss += L.item()
correct += torch.sum(torch.argmax(output, dim=1) == label).item()
test_loss /= len(test_loader)
correct /= len(test_loader.dataset)
print(f"correct predictions: {correct*100:.2f}%")
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Z5mT93PgqgTD" outputId="aeb093bd-a0ae-446c-9254-73441eec815e"
for i in range (20):
    plt.imshow(test_data[i][0])  # input image (true digit)
feature_np = np.array(test_data[i][0])
figure = feature_np.reshape(28,28)
image = feature_extraction(figure)
plt.title(torch.argmax(model(image))) #predicted output
plt.show()
# + [markdown] id="btkaKhBBYu6r"
# ### Testing the model on a hand-drawn image
# + [markdown] id="oqnbfI8qZQf-"
# We define a function to resize and center the image
# + id="hnKCktHNznIm"
def img_resize(path):
image = Image.open(path)
bw_image = image.convert(mode='L') #L is 8-bit black-and-white image mode
bw_image = ImageEnhance.Contrast(bw_image).enhance(1.5)
    # Invert the sample and get its bounding box.
inv_sample = ImageOps.invert(bw_image)
bbox = inv_sample.getbbox()
crop = inv_sample.crop(bbox)
crop.thumbnail((20,20))
#resize back
new_size = 28
delta_w = new_size - crop.size[0]
delta_h = new_size - crop.size[1]
padding = (delta_w//2, delta_h//2, delta_w-(delta_w//2), delta_h-(delta_h//2))
return np.array(ImageOps.expand(crop, padding))
# + id="zRGkoiad-4xn"
x = img_resize('/Users/moncefbender/Desktop/Digit_Pred_e0f55e10-c.png')
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="Vs7EQCpMFaV_" outputId="dce5d4a4-13ab-41d4-e84c-05d48d80cf3f"
plt.imshow(x)
figure = feature_extraction(x)
plt.title(torch.argmax(model(figure),0))
plt.show()
# + [markdown] id="_mWJpwxpuZ-Y"
# ### **Made in ENSIAS. By Benhima & Bender. Supervised By <NAME>.**
| HDR_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:calphad-dev-2]
# language: python
# name: conda-env-calphad-dev-2-py
# ---
# # Calculations with Reference States
#
# ## Experimental Reference States: Formation and Mixing Energy
#
# By default, energies calculated with pycalphad (e.g. `GM`, `HM`, etc.) are the absolute energies as defined in the database and are not calculated with respect to any reference state.
#
# pycalphad `Model` objects allow the reference for the pure components to be set to arbitrary phases and temperature/pressure conditions through the `shift_reference_state` method, which creates new properties for the energies that are referenced to the new reference state, `GMR`, `HMR`, `SMR`, and `CPMR`.
#
# ### Enthalpy of mixing
#
# The enthalpy of mixing in the liquid, analogous to what would be measured experimentally, is calculated and plotted below with the reference states of the pure elements both set to the liquid phase. No temperature and pressure are specified as we would like the reference state to be calculated with respect to the calculation temperature.
# %matplotlib inline
# +
from pycalphad import Database, calculate, Model, ReferenceState, variables as v
import matplotlib.pyplot as plt
dbf = Database("nbre_liu.tdb")
comps = ["NB", "RE", "VA"]
# Create reference states
Nb_ref = ReferenceState("NB", "LIQUID_RENB")
Re_ref = ReferenceState("RE", "LIQUID_RENB")
liq_refstates = [Nb_ref, Re_ref]
# Create the model and shift the reference state
mod_liq = Model(dbf, comps, "LIQUID_RENB")
mod_liq.shift_reference_state(liq_refstates, dbf)
calc_models = {"LIQUID_RENB": mod_liq}
# Calculate HMR for the liquid at 2800 K from X(RE)=0 to X(RE)=1
result = calculate(dbf, comps, "LIQUID_RENB", P=101325, T=2800, output="HMR", model=calc_models)
# Plot
fig = plt.figure(figsize=(9,6))
ax = fig.gca()
ax.scatter(result.X.sel(component='RE'), result.HMR, marker='.', s=5, label='LIQUID')
ax.set_xlim((0, 1))
ax.set_xlabel('X(RE)')
ax.set_ylabel('HM_MIX')
ax.set_title('Nb-Re LIQUID Mixing Enthalpy')
ax.legend()
plt.show()
# -
# ### Enthalpy of formation - convex hull
#
# Formation enthalpies are often reported in the literature with respect to the pure elements in their stable phase at 298.15 K. The enthalpy of formation of the phases in equilibrium, analogous to what would be measured experimentally, is calculated and plotted below for T=2800 K, with the reference states of the pure elements both set to the stable phases and fixed at 298.15 K and 1 atm.
# +
from pycalphad import Database, equilibrium, Model, ReferenceState, variables as v
import matplotlib.pyplot as plt
import numpy as np
dbf = Database("nbre_liu.tdb")
comps = ["NB", "RE", "VA"]
phases = dbf.phases.keys()
# Create reference states
Nb_ref = ReferenceState("NB", "BCC_RENB", {v.T: 298.15, v.P: 101325})
Re_ref = ReferenceState("RE", "HCP_RENB", {v.T: 298.15, v.P: 101325})
# Create the models for each phase and shift them all by the same reference states.
eq_models = {}
for phase_name in phases:
mod = Model(dbf, comps, phase_name)
mod.shift_reference_state([Nb_ref, Re_ref], dbf)
eq_models[phase_name] = mod
# Calculate HMR at 2800 K from X(RE)=0 to X(RE)=1
conds = {v.P: 101325, v.T: 2800, v.X("RE"): (0, 1, 0.01)}
result = equilibrium(dbf, comps, phases, conds, output="HMR", model=eq_models)
# Find the groups of unique phases in equilibrium e.g. [CHI_RENB] and [CHI_RENB, HCP_RENB]
unique_phase_sets = np.unique(result.Phase.values.squeeze(), axis=0)
# Plot
fig = plt.figure(figsize=(9,6))
ax = fig.gca()
for phase_set in unique_phase_sets:
label = '+'.join([ph for ph in phase_set if ph != ''])
# composition indices with the same unique phase
unique_phase_idx = np.nonzero(np.all(result.Phase.values.squeeze() == phase_set, axis=1))[0]
masked_result = result.isel(X_RE=unique_phase_idx)
ax.plot(masked_result.X_RE.squeeze(), masked_result.HMR.squeeze(), marker='.', label=label)
ax.set_xlim((0, 1))
ax.set_xlabel('X(RE)')
ax.set_ylabel('HM_FORM')
ax.set_title('Nb-Re Formation Enthalpy (T=2800 K)')
ax.legend()
plt.show()
# -
# ## Special `_MIX` Reference State
#
# pycalphad also includes a special mixing reference state, referenced to the endmembers of each phase, with the `_MIX` suffix (`GM_MIX`, `HM_MIX`, `SM_MIX`, `CPM_MIX`). This is particularly useful for seeing how the mixing contributions from physical or excess models affect the energy. The `_MIX` properties are available by default; no instantiation of `Model` objects or call to `shift_reference_state` is required.
#
# Below is an example for calculating this endmember-referenced mixing enthalpy for the $\chi$ phase in Nb-Re. Notice that the four endmembers have a mixing enthalpy of zero.
# +
from pycalphad import Database, calculate
import matplotlib.pyplot as plt
dbf = Database("nbre_liu.tdb")
comps = ["NB", "RE", "VA"]
# Calculate HM_MIX for the CHI phase at 2800 K from X(RE)=0 to X(RE)=1
result = calculate(dbf, comps, "CHI_RENB", P=101325, T=2800, output='HM_MIX')
# Plot
fig = plt.figure(figsize=(9,6))
ax = fig.gca()
ax.scatter(result.X.sel(component='RE'), result.HM_MIX, marker='.', s=5, label='CHI_RENB')
ax.set_xlim((0, 1))
ax.set_xlabel('X(RE)')
ax.set_ylabel('HM_MIX')
ax.set_title('Nb-Re CHI Mixing Enthalpy')
ax.legend()
plt.show()
# -
# ## Calculations at specific site fractions
#
# In the previous example, the mixing energy for the CHI phase in Nb-Re was sampled by generating site fractions linearly between endmembers and then randomly across site fraction space.
#
# Imagine now that you'd like to calculate the mixing energy along a single internal degree of freedom (i.e. between two endmembers), referenced to those endmembers.
#
# A custom 2D site fraction array can be passed to the `points` argument of `calculate` and the `HM_MIX` property can be calculated as above.
#
# The sublattice model for CHI is `RE : NB,RE : NB,RE`.
#
# If we are interested in the interaction along the second sublattice when NB occupies the third sublattice, we need to construct a site fraction array of
#
# ```python
# # RE, NB, RE, NB, RE
# [ 1.0, x, 1-x, 1.0, 0.0 ]
# ```
#
# where `x` varies from 0 to 1. This fixes the site fraction of RE in the first sublattice to 1 and the site fractions of NB and RE in the third sublattice to 1 and 0, respectively. Note that the site fraction array is sorted first in sublattice order, then in alphabetic order within each sublattice (e.g. NB is always before RE within a sublattice).
#
#
# +
from pycalphad import Database, calculate
import numpy as np
import matplotlib.pyplot as plt
dbf = Database("nbre_liu.tdb")
comps = ["NB", "RE", "VA"]
# The values for the internal degree of freedom we will vary
n_pts = 1001
x = np.linspace(1e-12, 1, n_pts)
# Create the site fractions
# The site fraction array is ordered first by sublattice, then alphabetically by species within a sublattice.
# The site fraction array is therefore `[RE#0, NB#1, RE#1, NB#2, RE#2]`, where `#0` is the sublattice at index 0.
# To calculate a RE:NB,RE:NB interaction requires the site fraction array to be [1, x, 1-x, 1, 0]
# Note the 1-x is required for site fractions to sum to 1 in sublattice #1.
site_fractions = np.array([np.ones(n_pts), x, 1-x, np.ones(n_pts), np.zeros(n_pts)]).T
print('Site fractions:')
print(site_fractions)
print('Site fractions shape: {} ({} points, {} internal degrees of freedom)'.format(site_fractions.shape, site_fractions.shape[0], site_fractions.shape[1]))
# Calculate HM_MIX for the CHI phase at 2800 K from Y(CHI, 1, RE)=0 to Y(CHI, 1, RE)=1
# Pass the custom site fractions to the `points` argument
result = calculate(dbf, comps, "CHI_RENB", P=101325, T=2800, points=site_fractions, output='HM_MIX')
# Extract the site fractions of RE in sublattice 1.
Y_CHI_1_RE = result.Y.squeeze()[:, 2]
# Plot
fig = plt.figure(figsize=(9,6))
ax = fig.gca()
ax.scatter(Y_CHI_1_RE, result.HM_MIX, marker='.', s=5)
ax.set_xlim((0, 1))
ax.set_xlabel('Y(CHI, 1, RE)')
ax.set_ylabel('HM_MIX')
ax.set_title('Nb-Re CHI Mixing Enthalpy')
plt.show()
| examples/ReferenceStateExamples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a name="top"></a>
#
# <div style="width:1000px">
#
# <h1>Imbalance remains small: QG theory's big idea</h1>
# <h3><NAME>, University of Miami, <EMAIL></h3>
# <h3>Fall 2019</h3>
#
# ### Activities
# 1. <a href="#Exercise1">Answer a question about the mathematical derivation</a>
# 1. <a href="#Exercise2">Momentum space: IDV bundle on frontogenesis</a>
# 1. <a href="#Exercise3">Vorticity space: IDV bundle on QG omega</a>
# 1. <a href="#Exercise4">Vorticity space, fancy: IDV bundle on Q-vectors</a>
#
# <div style="clear:both"></div>
# </div>
#
# <hr style="height:2px;">
# ### Background
# The primitive equations:
# 1. Horizontal acceleration equation (F=ma):
#
# $$ \frac{D}{Dt} \vec V_h = f \vec V_h \times \hat k- \vec \nabla_p \Phi +\vec F_r $$
#
# or, in component form,
#
# $$ \frac{Du}{Dt} = fv - \frac{\partial \Phi}{\partial x} +F $$
#
# $$ \frac{Dv}{Dt} = -fu - \frac{\partial \Phi}{\partial y} +G $$
#
# 1. Vertical acceleration equation (aka Hydrostatic balance):
# $$ \frac {\partial \Phi}{\partial p} = - \alpha \equiv -\frac{1}{\rho} = -\frac{RT}{p} $$
#
# 1. Mass continuity:
# $$ 0 = \frac {\partial u}{\partial x} + \frac {\partial v}{\partial y} + \frac {\partial \omega}{\partial p} $$
#
# 1. First Law of thermodynamics:
# $$ \frac {\partial T}{\partial t} = -\vec V_h \cdot \vec \nabla T - \omega \sigma + J/C_p $$
#
# (where $\sigma = \frac {\partial \theta}{\partial p} T/\theta = \frac {\partial (C_pT + gZ)}{\partial p} /C_p$)
#
# The hypothetical state of geostrophic balance (dV/dt=0, F=G=0) defines the *geostrophic wind*:
# $$ fv_g = +\frac{\partial \Phi}{\partial x}$$
#
# $$ fu_g = -\frac{\partial \Phi}{\partial y}$$
#
# We can write "imbalance" as the deviation of the *actual winds* from geostrophy:
# $$ imb_x = fv -\frac{\partial \Phi}{\partial x}$$
#
# $$ imb_y = fu +\frac{\partial \Phi}{\partial y}$$
#
#
#
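# For reference (my addition, not in the original handout), the operator $\frac{D}{Dt}$ in the equations above is the material derivative in pressure coordinates:

```latex
\frac{D}{Dt} = \frac{\partial}{\partial t}
             + u \frac{\partial}{\partial x}
             + v \frac{\partial}{\partial y}
             + \omega \frac{\partial}{\partial p}
```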
# <hr style="height:2px;">
# # The Big Idea: imbalance remains small *through time*
#
# $$ imb_x = fv -\frac{\partial \Phi}{\partial x}$$
#
# $$ imb_y = fu +\frac{\partial \Phi}{\partial y}$$
# <a name="Exercise1"></a>
# <div class="alert alert-success">
# <b>EXERCISE 1</b>:
# <ul>
#
# <li>Take the vertical derivative of the geostrophic wind relations, combining them with hydrostatic balance, to obtain the thermal wind. </li>
# <li>Express the *result* correctly in the cell below. Learn how to typeset equations from the cell above. </li>
# </ul>
# </div>
# ## Exercise 1 Answer: The thermal wind is
# $\frac {\partial u_g}{\partial Z}$ =
#
# $\frac {\partial v_g}{\partial Z}$ =
#
# <hr style="height:2px;">
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise2"></a>
# ## Exercise 2: Invoke the drilsdown extension. Then find and load the bundle in IDV.
# Execute the code cell below.
#
# Then, launch an IDV session and load the bundle in it. You can do this manually, or -- in the widget that pops up, search The Mapes IDV Collection for the bundle name such as *LMT_1.2*, and click the button next to it labeled **Load bundle**.
# <div class="alert alert-info">
# <b>Note</b>: please email <EMAIL> if you encounter any errors
# </div>
# %reload_ext ipyidv
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise3"></a>
# <div class="alert alert-success">
# <b>EXERCISE 3</b>:
# <ul>
# <li>**Capture an image. Describe the image. Explain in at least 3 sentences.**
# </li>
# </ul>
# </div>
# Your image
# %make_image -capture legend -caption 'Exercise 3'
# ## Exercise 3 answer:
#
#
# Lorem ipsum
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise4"></a>
# <div class="alert alert-success">
# <b>EXERCISE 4</b>:
# <ul>
# <li>**Capture an image, and answer the question below.**
# </li>
# </ul>
# </div>
# To use this feature, the IDV display must have its ID set to this name, in Properties.
# %make_image -display sounding
# ## Exercise 4 answer:
#
#
# Lorem ipsum
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise4"></a>
# <div class="alert alert-success">
# <b>EXERCISE 3</b>:
# <ul>
#
# <li>**Use the right mouse button to suppress (make red) all but 3 time levels in the animation, focusing on the time around or just after the time you used for Exercise 2a. Capture the animation using the cell below.**
# </li>
#
# <li>**Does the cold or warm air advance as you anticipated from Exercise 3a? Does the sounding show a decrease of temperature with time?**
# </li>
#
# <li>**What does this mean about the terms in the First Law? Is the horizontal advection term the dominant effect?**
# </li>
#
# </ul>
# </div>
# %make_movie -capture legend -caption 'Exercise 3'
# ## Exercise 3b answer:
#
# The animation shows ___.
#
# This means ___.
#
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise4"></a>
# <div class="alert alert-success">
# <b>EXERCISE 4</b>:
# <ul>
#
# <li>**
# Make a new IDV Flow Display of the Thermal Wind. To do this, use the Field Selector to display Formulas-->Miscellaneous-->simple difference a-b. Select "a" to be the 3D-->Derived-->geostrophic wind, at 50000Pa (careful!), and select b to be the geostrophic wind at 85000Pa.
# **</li>
#
# <li>**
# Capture an image of your flow display (vectors, or barbs, or streamlines -- explore!) overlaid on the 700hPa temperature display.
# **</li>
#
#
# </ul>
# </div>
# Your image
# %make_image -capture legend -caption Exercise4
# <div class="alert alert-success">
# <b>EXERCISE 4</b>:
# <ul>
#
# <li>**
# What is the relationship? Does the thermal wind advect temperature?
# **</li>
#
#
# <li>**
# $V_{g,lower} + V_{thermal} = V_{g,upper}$ is the way we customarily speak of the thermal wind.
# **</li>
#
#
# <li>**
# Temperature advection by the geostrophic wind at 500 hPa is thus $(V_{g,850} + V_{thermal}) \cdot \nabla T$. What can you say about its relationship to temperature advection at 850?
# **</li>
#
# </ul>
# </div>
# ## Exercise 4 answer:
#
# Lorem ipsum
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise5"></a>
# <div class="alert alert-success">
# <b>EXERCISE 5</b>:
# <ul>
#
# <li>**
# Activate the yellow isosurface of wind speed (the jet stream), and the cross sections displays. Deactivate the other displays. Move the cross sections to cut across today's jet stream along the west coast.
# **</li>
#
# <li>**
# Take a view from the south (using the little cube with the blue south face), and adjust it a little so you can see the cross section well, but also the geography.
# **</li>
#
# <li>**
# Capture an image of the cross-section with wind speed contours with a backdrop of T, and a backdrop of theta.
# **</li>
#
#
# </ul>
# </div>
# Your image
# %make_image -capture legend -caption 'Exercise 5'
# **Which side of the jet is the cold side, and which is the warm side (in the troposphere, and in the stratosphere)? If this jet stream excursion into US latitudes is viewed as a "vortex", is it warm-core or cool-core in the troposphere?**
#
#
# **From an examination of the relvort isosurface, at what altitude is the vorticity of this vortex strongest?**
#
#
# ## Exercise 5 answer:
#
# <NAME>
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise6"></a>
# <div class="alert alert-success">
# <b>EXERCISE 6</b>:
# <ul>
#
# <li>**
# Make a new 3D isosurface display of potential temperature $\theta$, at a value of 350K, colored by geopotential height.
# **</li>
#
# <li>**
# If 350K is viewed as the tropopause, is it higher on the poleward or equatorward side of the jet?
# **</li>
#
# </ul>
# </div>
# Your image
# %make_image -capture legend -caption Exercise6
# ## Exercise 6 answer:
#
# <NAME>
# <a href="#top">Top</a>
# <hr style="height:2px;">
# <a name="Exercise7"></a>
# <div class="alert alert-success">
# <b>EXERCISE 7</b>:
# <ul>
#
# <li>**
# Examine the Rex block that develops on the west coast late in the period. Capture an image.
# **</li>
#
# <li>**
# See the Wikipedia page on Rex blocks, and link the image to this notebook if you can.
# **</li>
#
# <li>**
# Based on our Irma lessons about vorticity, and the relative vorticity display (which can be adjusted to show both positive and negative values if you like), explain how the Rex block will be a nearly stationary feature, and discuss what areas might get interesting weather (like a lot of precipitation) from such an event.
# **</li>
#
#
# </ul>
# </div>
# Your image
# %make_image -capture legend -caption Exercise7
# Wikipedia image
from IPython.display import Image
Image("an image URL goes here")
# ## Exercise 7 answer:
#
# Lorem ipsum
# # Upload this notebook to your GitHub fork (I can find it there).
# <a href="#top">Top</a>
# <hr style="height:2px;">
| Notebooks/Imbalance_remains_small.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introductory examples
# ## 1.usa.gov data from bit.ly
# %pwd
path = 'ch02/usagov_bitly_data2012-03-16-1331923249.txt'
open(path).readline()
import json
path = 'ch02/usagov_bitly_data2012-03-16-1331923249.txt'
records = [json.loads(line) for line in open(path)]
records[0]
records[0]['tz']
print(records[0]['tz'])
type(records[0])
# A key's presence can be checked with 'in', or with d.get('key', -1), which returns None or a value of your choosing, such as -1
'tz' in records[0]
# ### Counting time zones in pure Python
# An error here is expected, because not every record has a time zone
time_zones = [rec['tz'] for rec in records]
time_zones = [rec['tz'] for rec in records if 'tz' in rec]
len(time_zones)
# Write the counting function in plain Python
def get_counts(sequence):
counts = {}
for x in sequence:
        # If the time zone is already in the dict, increment its count; otherwise set it to 1
if x in counts:
counts[x] += 1
else:
counts[x] = 1
return counts
# A faster way to write the function, using the Python standard library
from collections import defaultdict
# defaultdict: with an ordinary dict a missing key raises an error; defaultdict returns a default value of your choosing instead
def get_counts2(sequence):
counts = defaultdict(int) # values will initialize to 0
for x in sequence:
counts[x] += 1
return counts
counts = get_counts(time_zones)
counts['America/New_York']
len(time_zones)
# Get the top 10 time zones and their counts with a hand-written Python function
# With the default n=10, the code can be reused to pick the n most common time zones
def top_counts(count_dict, n=10):
    # Build a list of (count, tz) tuples, then sort the list
value_key_pairs = [(count, tz) for tz, count in count_dict.items()]
value_key_pairs.sort()
return value_key_pairs[-n:]
top_counts(counts)
# The same result, more quickly, via collections.Counter from the standard library
from collections import Counter
counts = Counter(time_zones)
counts.most_common(10)
# ### Counting time zones with pandas
from numpy.random import randn
import numpy as np
import os
import matplotlib.pyplot as plt
import pandas as pd
# A DataFrame represents the data as a table
# %matplotlib inline
import json
path = 'ch02/usagov_bitly_data2012-03-16-1331923249.txt'
lines = open(path).readlines()
records = [json.loads(line) for line in lines]
# +
from pandas import DataFrame, Series
import pandas as pd
frame = DataFrame(records)
frame
# -
frame['tz'][:10]
tz_counts = frame['tz'].value_counts()
tz_counts[:10]
clean_tz = frame['tz'].fillna('Missing')
print(type(clean_tz))
clean_tz[clean_tz == ''] = 'Unknown'
tz_counts = clean_tz.value_counts()
tz_counts[:10]
plt.figure(figsize=(15, 5))
# Nothing is drawn until plot is executed
tz_counts[:10].plot(kind='barh', rot=0)
# We can also look at information such as the user's browser and operating system
frame['a'][1]
frame['a'][50]
frame['a'][51]
# Parse the agent strings
# Split on whitespace and take the first token
results = Series([x.split()[0] for x in frame.a.dropna()])
results[:5]
results.value_counts()[:8]
# Treat records whose agent string contains 'Windows' as Windows users
cframe = frame[frame.a.notnull()]
# np.where(condition, value_if_true, value_if_false)
operating_system = np.where(cframe['a'].str.contains('Windows'),
'Windows', 'Not Windows')
operating_system[:10]
by_tz_os = cframe.groupby(['tz', operating_system])
agg_counts = by_tz_os.size().unstack().fillna(0)
agg_counts[:10]
# argsort gives the index order that sorts the row totals in ascending order
indexer = agg_counts.sum(1).argsort()
indexer[:10]
count_subset = agg_counts.take(indexer)[-10:]
count_subset
plt.figure()
count_subset.plot(kind='barh', stacked=True)
plt.figure()
normed_subset = count_subset.div(count_subset.sum(1), axis=0)
normed_subset.plot(kind='barh', stacked=True)
# ## MovieLens 1M data set
# +
import pandas as pd
import os
encoding = 'latin1'
upath = os.path.expanduser('ch02/movielens/users.dat')
rpath = os.path.expanduser('ch02/movielens/ratings.dat')
mpath = os.path.expanduser('ch02/movielens/movies.dat')
unames = ['user_id', 'gender', 'age', 'occupation', 'zip']
rnames = ['user_id', 'movie_id', 'rating', 'timestamp']
mnames = ['movie_id', 'title', 'genres']
users = pd.read_csv(upath, sep='::', header=None, names=unames, encoding=encoding)
ratings = pd.read_csv(rpath, sep='::', header=None, names=rnames, encoding=encoding)
movies = pd.read_csv(mpath, sep='::', header=None, names=mnames, encoding=encoding)
# -
users[:5]
ratings[:5]
pd.merge(users,ratings)
movies[:5]
ratings
# First merge ratings and users with pandas' merge function, then merge movies in as well; merge joins on the overlapping key columns
data = pd.merge(pd.merge(ratings, users), movies)
data.iloc[0]
# Arguments: the values to aggregate, the new table's row index, its column index, and the aggregation method
mean_ratings = data.pivot_table('rating', index='title',
columns='gender', aggfunc='mean')
mean_ratings[:5]
# Group by the 'title' column and count the size of each group
ratings_by_title = data.groupby('title').size()
ratings_by_title[:5]
active_titles = ratings_by_title.index[ratings_by_title >= 250]
active_titles[:10]
# Use .loc for label-based indexing and .iloc for position-based indexing
mean_ratings = mean_ratings.loc[active_titles]
mean_ratings
mean_ratings = mean_ratings.rename(index={'Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954)':
'Seven Samurai (Shichinin no samurai) (1954)'})
# ascending=False sorts in descending order
top_female_ratings = mean_ratings.sort_values(by='F', ascending=False)
top_female_ratings[:10]
# ### Measuring rating disagreement
mean_ratings['diff'] = mean_ratings['M'] - mean_ratings['F']
sorted_by_diff = mean_ratings.sort_values(by='diff')
sorted_by_diff[:15]
# Reverse order of rows, take first 15 rows
sorted_by_diff[::-1][:15]
# Standard deviation of rating grouped by title
rating_std_by_title = data.groupby('title')['rating'].std()
# Filter down to active_titles
rating_std_by_title = rating_std_by_title.loc[active_titles]
# Order the Series by value in descending order (the old .order method has been replaced by sort_values)
rating_std_by_title.sort_values(ascending=False)[:10]
# ### US Baby Names 1880-2010
from numpy.random import randn
import numpy as np
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(12, 5))
np.set_printoptions(precision=4)
# %pwd
# http://www.ssa.gov/oact/babynames/limits.html
# !head -n 10 ch02/names/yob1880.txt
import pandas as pd
names1880 = pd.read_csv('ch02/names/yob1880.txt', names=['name', 'sex', 'births'])
names1880[:10]
names1880.groupby('sex').births.sum()
# +
# 2010 is the last available year right now
years = range(1880, 2011)
pieces = []
columns = ['name', 'sex', 'births']
for year in years:
path = 'ch02/names/yob%d.txt' % year
frame = pd.read_csv(path, names=columns)
frame['year'] = year
pieces.append(frame)
# Concatenate everything into a single DataFrame
names = pd.concat(pieces, ignore_index=True)
# -
# Pivot table of births by year and sex
total_births = names.pivot_table('births', index='year',
columns='sex', aggfunc=sum)
total_births.tail()
# %matplotlib inline
total_births.plot(title='Total births by sex and year')
def add_prop(group):
    # Each name's proportion of births within its (year, sex) group
births = group.births
group['prop'] = births / births.sum()
return group
names = names.groupby(['year', 'sex']).apply(add_prop)
names
np.allclose(names.groupby(['year', 'sex']).prop.sum(), 1)
def get_top1000(group):
return group.sort_values(by='births', ascending=False)[:1000]
grouped = names.groupby(['year', 'sex'])
top1000 = grouped.apply(get_top1000)
pieces = []
for year, group in names.groupby(['year', 'sex']):
pieces.append(group.sort_values(by='births', ascending=False)[:1000])
top1000 = pd.concat(pieces, ignore_index=True)
top1000.index = np.arange(len(top1000))
top1000
# ### Analyzing naming trends
boys = top1000[top1000.sex == 'M']
girls = top1000[top1000.sex == 'F']
total_births = top1000.pivot_table('births', index='year', columns='name',
aggfunc=sum)
total_births
subset = total_births[['John', 'Harry', 'Mary', 'Marilyn']]
subset.plot(subplots=True, figsize=(12, 10), grid=False,
title="Number of births per year")
# #### Measuring the increase in naming diversity
plt.figure()
table = top1000.pivot_table('prop', index='year',
columns='sex', aggfunc=sum)
table.plot(title='Sum of table1000.prop by year and sex',
yticks=np.linspace(0, 1.2, 13), xticks=range(1880, 2020, 10))
df = boys[boys.year == 2010]
df
prop_cumsum = df.sort_values(by='prop', ascending=False).prop.cumsum()
prop_cumsum[:10]
prop_cumsum.values.searchsorted(0.5)
df = boys[boys.year == 1900]
in1900 = df.sort_values(by='prop', ascending=False).prop.cumsum()
in1900.values.searchsorted(0.5) + 1
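# The searchsorted idiom above can be sketched in plain Python (a hypothetical helper, not from the book): sort the proportions in descending order, accumulate them, and find the first position where the running sum reaches the target quantile.

```python
from bisect import bisect_left

def quantile_count(props, q=0.5):
    """Number of top names needed to account for at least a fraction q of births."""
    running, cumulative = 0.0, []
    for p in sorted(props, reverse=True):
        running += p
        cumulative.append(running)
    # bisect_left returns the first index whose cumulative sum is >= q
    return bisect_left(cumulative, q) + 1

quantile_count([0.4, 0.3, 0.2, 0.1])  # 2 names cover at least half the births
```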
# +
def get_quantile_count(group, q=0.5):
group = group.sort_values(by='prop', ascending=False)
return group.prop.cumsum().values.searchsorted(q) + 1
diversity = top1000.groupby(['year', 'sex']).apply(get_quantile_count)
diversity = diversity.unstack('sex')
# -
diversity.head()
diversity.plot(title="Number of popular names in top 50%")
# #### The "Last letter" Revolution
# +
# extract last letter from name column
get_last_letter = lambda x: x[-1]
last_letters = names.name.map(get_last_letter)
last_letters.name = 'last_letter'
table = names.pivot_table('births', index=last_letters,
columns=['sex', 'year'], aggfunc=sum)
# -
subtable = table.reindex(columns=[1910, 1960, 2010], level='year')
subtable.head()
subtable.sum()
letter_prop = subtable / subtable.sum().astype(float)
# +
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 1, figsize=(10, 6))
letter_prop['M'].plot(kind='bar', rot=0, ax=axes[0], title='Male')
letter_prop['F'].plot(kind='bar', rot=0, ax=axes[1], title='Female',
legend=False)
# -
plt.subplots_adjust(hspace=0.25)
# +
letter_prop = table / table.sum().astype(float)
dny_ts = letter_prop.loc[['d', 'n', 'y'], 'M'].T
dny_ts.head()
# -
plt.close('all')
dny_ts.plot()
# #### Boy names that became girl names (and vice versa)
all_names = top1000.name.unique()
mask = np.array(['lesl' in x.lower() for x in all_names])
lesley_like = all_names[mask]
lesley_like
filtered = top1000[top1000.name.isin(lesley_like)]
filtered.groupby('name').births.sum()
table = filtered.pivot_table('births', index='year',
columns='sex', aggfunc='sum')
table = table.div(table.sum(1), axis=0)
table.tail()
plt.close('all')
table.plot(style={'M': 'k-', 'F': 'k--'})
| branches/1st-edition/ch02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: lbl_amanzi
# language: python
# name: python3
# ---
# +
import geopandas as gpd
site_json_file_path = "../../data/LMsites.json"
sites = gpd.read_file(site_json_file_path)
print(sites.shape)
sites
# +
import importlib
from climate_resilience import downloader as dd
importlib.reload(dd)
output_dir = "gee_downloader_testing" # Folder located in Google Drive. Will be created if not already present.
site_json_file_path = "../../../data/LMsites.json"
yaml_path = "../scripts/download_params.yml"
sd_obj = dd.SitesDownloader(
folder=output_dir,
site_json_file_path=site_json_file_path,
# latitude_range=(30, 50),
longitude_range=(-150, -120),
)
print(sd_obj.sites.shape)
# -
sd_obj.sites
'1' + 3
| examples/climate-resilience/notebooks/scratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torchvision
torch.__version__
# # 4.5 Multi-GPU parallel training
#
# When training neural networks, the computation involved is enormous, so running on a single GPU takes a very long time and we cannot get results promptly. For example, training a classifier on the ImageNet data with a single GPU could take a week or even a month. PyTorch therefore provides a multi-GPU mechanism, which lets training speed grow dramatically.
#
# Stanford's [DAWNBench](https://dawn.cs.stanford.edu/benchmark/) records multi-GPU training results and implementation code to date; take a look if you are interested.
#
# In this chapter we introduce three ways of using multiple GPUs for acceleration.
# ## 4.5.1 torch.nn.DataParallel
# In most cases we use a single host with several graphics cards, which is the most budget-friendly setup. PyTorch offers a very simple way to support single-host multi-GPU training: `torch.nn.DataParallel`. We just pass our own model in as the argument, and PyTorch does the rest for us.
# Use one of the built-in models, resnet50, as the example
model = torchvision.models.resnet50()
# Wrap the model for multi-GPU use
mdp = torch.nn.DataParallel(model)
mdp
# With just this simple wrapper, PyTorch has already done a lot of complex work for us. We only need to enlarge our training batch_size (generally to N times the original, where N is the number of GPUs); no other code changes are needed.
# Although the code needs no changes, training converges slowly when the batch size is very large, so the learning rate should be raised as well. A large learning rate makes the early stage of training quite unstable, so a learning-rate warm-up is needed to stabilize the gradient descent before the learning rate is gradually increased.
#
# This kind of warm-up is only needed for extremely large batches; for our usual one machine with 4 cards, or batch sizes under about 5000 (personal test), it is basically unnecessary. For example, Fujitsu's recent experiment that finished training ResNet-50 in 74 seconds on 2048 GPUs used a batch size of 81920 [arxiv](http://www.arxiv.org/abs/1903.12650); only sizes that huge need it.
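# A linear warm-up can be sketched in a few lines of plain Python (a hypothetical helper with made-up numbers, not from this chapter): the learning rate ramps from near zero up to its target over the first warmup_steps updates, then stays constant.

```python
def warmup_lr(step, base_lr, warmup_steps):
    """Learning rate at a given update step under a linear warm-up schedule."""
    if step < warmup_steps:
        # Linear ramp: step 0 gets base_lr / warmup_steps, the last warm-up step gets base_lr
        return base_lr * (step + 1) / warmup_steps
    return base_lr

# e.g. base_lr=0.4 with a 5-step warm-up ramps 0.08 -> 0.4 over five steps, then holds at 0.4
schedule = [warmup_lr(s, 0.4, 5) for s in range(7)]
```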
# DataParallel works like this: the model is first loaded onto the master GPU (by default the first GPU, GPU 0), and then replicated to each of the other specified GPUs. The input data are split along the batch dimension, so the number of batch samples each GPU receives is the total input batch divided by the number of specified GPUs. Each GPU performs the forward computation independently on its own inputs; finally the losses from all GPUs are summed, backpropagation updates the model parameters on the single master GPU, and the updated parameters are copied to the remaining specified GPUs, which completes one iteration.
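# The batch-splitting step can be illustrated with a simplified pure-Python sketch (DataParallel itself scatters tensors along the batch dimension; the chunk sizes here mimic ceiling-division chunking, and the helper is illustrative, not part of the API):

```python
def scatter_batch(batch, n_gpus):
    """Split a batch into (up to) n_gpus chunks, one per GPU."""
    size = -(-len(batch) // n_gpus)  # ceiling division: samples per GPU
    return [batch[i:i + size] for i in range(0, len(batch), size)]

scatter_batch(list(range(10)), 4)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```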
#
# DataParallel is itself an nn.Module, so we can save weights in the same way as with an ordinary nn.Module; the only difference is that if you want to run inference on a single GPU or on the CPU, you need to pull the original model out of it
#获取到原始的model
m=mdp.module
m
# DataParallel places the network's parameters on GPU 0 by default, so it essentially copies the training parameters from GPU 0 to the other GPUs and trains on all of them at once. This causes a serious load imbalance in memory and utilization: GPU 0 acts as the master that aggregates gradients, updates the model, and dispatches work to the other GPUs, so its memory use and utilization are much higher than those of the other cards.
#
# We therefore use the newer torch.distributed for better-synchronized distributed computation. torch.distributed supports not only a single machine but also multiple machines, each with multiple GPUs.
# ## 4.5.2 torch.distributed
# `torch.distributed` is a lower-level API than `torch.nn.DataParallel`, so we have to modify our code so that it can run independently on each machine (node). To be fully distributed, with an independent process on every GPU of every node, we need N processes in total, where N is the total number of GPUs; here we take N = 4.
#
# First initialize the distributed backend, wrap the model, and prepare the data used to train each process on its own independent data subset. The modified code looks like this
# +
# The following script will certainly fail if run in a Jupyter notebook; save it as a .py file before testing
import argparse

import torch
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data import DataLoader
# Here node_rank identifies the local GPU
parser = argparse.ArgumentParser()
parser.add_argument("--node_rank", type=int)
args = parser.parse_args()
# Initialize the process group using NVIDIA's NCCL backend
torch.distributed.init_process_group(backend='nccl')
# Wrap the model on the GPU assigned to the current process
device = torch.device('cuda', args.node_rank)
model = model.to(device)
distrib_model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.node_rank],
output_device=args.node_rank)
# Restrict data loading to the subset of the dataset assigned to the current process
sampler = DistributedSampler(dataset)
dataloader = DataLoader(dataset, sampler=sampler)
for inputs, labels in dataloader:
    predictions = distrib_model(inputs.to(device))       # forward pass
    loss = loss_function(predictions, labels.to(device)) # compute the loss
    loss.backward()                                      # backward pass
    optimizer.step()                                     # optimizer step
# -
# At runtime we can no longer simply launch the script with `python filename`; we need the torch.distributed.launch runner that PyTorch provides. It sets the environment variables automatically and calls the script with the correct node_rank argument.
#
# We need one machine to act as the master, and every machine must be able to reach it. It therefore needs an accessible IP address (192.168.100.100 in the example) and an open port (6666 in the example). We run the script on the first machine with torch.distributed.launch:
# ```bash
# python -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr="192.168.100.100" --master_port=6666 文件名 (--arg1 --arg2 等其他参数)
# ```
# On the second machine, simply change the argument to `--node_rank=1`
# You may well get an error when running; that is because the NCCL socket network interface has not been set
# Taking a network card named ens3 as an example, enter
# ```bash
# export NCCL_SOCKET_IFNAME=ens3
# ```
# The name ens3 can be confirmed with the ifconfig command
# Parameter descriptions:
#
# --nproc_per_node : the number of GPUs on each machine
#
# --nnodes : the total number of machines
#
# --node_rank : the rank of this machine among all machines
#
# Some other parameters are described in the [official documentation](https://github.com/pytorch/pytorch/blob/master/torch/distributed/launch.py)
# torch.distributed supports not only nccl but also two other backends, gloo and mpi. We will not compare them in detail here; see the [official documentation](https://pytorch.org/docs/stable/distributed.html)
# ## 4.5.3 torch.utils.checkpoint
# During training we may run into a case (I have not yet) where a single training sample is larger than memory and simply cannot be loaded. How can we train then?
#
# PyTorch provides gradient checkpointing to save computational resources: it splits the otherwise continuous forward and backward passes into segments. Because extra computation is traded for lower memory requirements, the method is somewhat less efficient, but it has clear advantages in certain cases, such as training RNN models on long sequences. Since that is hard to reproduce here, we will not cover it further; the official documentation is [here](https://pytorch.org/docs/stable/checkpoint.html) for anyone who encounters this situation.
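# As a minimal sketch of the API named above (not the official example), `torch.utils.checkpoint.checkpoint` trades compute for memory by recomputing a segment's activations during the backward pass instead of storing them:

```python
import torch
from torch.utils.checkpoint import checkpoint

# A small segment whose intermediate activations we choose not to store
segment = torch.nn.Sequential(
    torch.nn.Linear(10, 10), torch.nn.ReLU(),
    torch.nn.Linear(10, 10), torch.nn.ReLU(),
)

x = torch.randn(4, 10, requires_grad=True)
# Forward pass without saving activations; they are recomputed in backward
y = checkpoint(segment, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)  # torch.Size([4, 10])
```

The memory saving only pays off for segments with large intermediate activations; for a toy model like this the recomputation overhead dominates.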
| Deep-learning-framework/pytorch/pytorch-handbook/chapter4/.ipynb_checkpoints/4.5-multiply-gpu-parallel-training-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Introduction
# Let's work through https://www.tensorflow.org/tutorials/image_retraining but using satellite jpg images I downloaded locally from https://sites.google.com/view/zhouwx/dataset?authuser=0. The retrain script is at https://github.com/tensorflow/hub/tree/master/examples/image_retraining (requires JPG; TIFF doesn't work). By default
# it uses the feature vectors computed by Inception V3 trained on ImageNet, which is highly accurate, but comparatively large and slow. It's recommended to start with this to validate that you have gathered good training data, but if you want to deploy on resource-limited platforms, you can try the `--tfhub_module` flag with a Mobilenet model.
#
# ### The image dataset
# PatternNet is a large-scale high-resolution remote sensing dataset collected for remote sensing image retrieval. There are 38 classes and each class has 800 images of size 256×256 pixels. The images in PatternNet are collected from Google Earth imagery or via the Google Map API for some US cities. The following table shows the classes and the corresponding spatial resolutions. The figure shows some example images from each class.
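# A quick way to verify those class counts locally is a helper like the sketch below (the `class_counts` name is our own, and the PatternNet path assumed here may differ on your machine):

```python
import os

def class_counts(data_path):
    """Return {class_folder: number_of_files} for a directory of class folders."""
    return {
        folder: len(os.listdir(os.path.join(data_path, folder)))
        for folder in os.listdir(data_path)
        if os.path.isdir(os.path.join(data_path, folder))
    }

# For a full PatternNet download we would expect 38 entries of 800 images each:
# counts = class_counts("/Users/robincole/Documents/Data/PatternNet/Images")
```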
import glob
import tensorflow as tf
import os
import pandas as pd
import subprocess
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
from IPython.display import Image
data_path = "/Users/robincole/Documents/Data/PatternNet/Images"
retrain_script_path = "/Users/robincole/Documents/Github/tensorflow-hub/examples/image_retraining/retrain.py"
image_folders = os.listdir(data_path)
print("I have {} folders of images".format(len(image_folders)))
image_folders
print("Each folder has approximately {} images".format(len(os.listdir(os.path.join(data_path, image_folders[0])))))
# Let's plot one of the images
img_path = glob.glob(data_path + '/sparse_residential/*', recursive=True)[0]
print(img_path)
img = plt.imread(img_path)
print(img.shape)
plt.imshow(img);
# ## Perform the transfer learning with Inception V3
#
# The `retrain.py` script will take care of splitting data into test/train/validate. Execute retrain with the following:
run_command_str = "python3 {} --image_dir {}".format(retrain_script_path, data_path)
print(run_command_str)
# ## Evaluate training
#
# Start retrain at 11:55, finished at 14:27, so approximately 2.5 hours.
# Visualize the summaries with this command:
#
# `tensorboard --logdir /tmp/retrain_logs`
Image("../resources/tensorboard.png", width=800)
# The training accuracy shows the percentage of the images used in the current training batch that were labeled with the correct class.
#
# The validation accuracy is the percentage of correctly-labelled images in a randomly-selected group of images from a different set.
#
# Let's get the final train and validation data:
train_accuracy = pd.read_csv("../resources/run_train-tag-accuracy_1.csv")
validation_accuracy = pd.read_csv("../resources/run_validation-tag-accuracy_1.csv")
#plt.plot(train_accuracy['Value'], label='train')
plt.plot(validation_accuracy['Value'], label='validation')
plt.ylim((0.85, 1.0))
plt.legend();
# Validation accuracy is about 96%
model_path = "/Users/robincole/Documents/Data/PatternNet/inception_v3/output_graph.pb"
file_size = int(os.stat(model_path).st_size/1e6)
print("The generated model (`.pb`) file is {} MB".format(file_size))
# +
labels_path = "/Users/robincole/Documents/Data/PatternNet/inception_v3/output_labels.txt"
labels = []
with open(labels_path) as f:
for line in f:
labels.append(line.strip('\n'))
print("The number of labels is {} and we expected {}".format(len(labels), len(image_folders)))
# -
# We can serve the model with
#
# `tensorflow_model_server --port=9000 --model_name=my_image_classifier --model_base_path=/Users/robincole/Documents/Data/PatternNet`
#
# ERROR `tensorflow_model_server: command not found`
#
# ## Using the model
#
# We now use the model using `label_image.py` from https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/label_image/label_image.py
poets_folder = "/Users/robincole/Documents/Github/tensorflow-for-poets-2/scripts"
os.listdir(poets_folder)
# Let's first pass one of the images used during training. The command we require is:
"python3 {} --image={} --graph={} --labels={} --input_layer=Placeholder --output_layer=final_result".format("label_image.py", img_path, model_path, labels_path) # --output_layer=final_result:0
# Response is:
# ```
# forest 0.9981623
# river 0.00075750035
# sparse residential 0.00010027897
# oil gas field 8.9558e-05
# railway 8.550265e-05
# ```
#
# NOTE: I changed the test image to one that includes a residential area
# ## mobilenet_v2
# Let's retrain with [mobilenet_v2](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet), where we sacrifice a little accuracy for much smaller file sizes and/or faster speeds.
#
# Start retrain at 16:30, finished at 17:40, so 1 hr 10
#
# Required command is:
run_command_str = "python3 {} --image_dir {} --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/1".format(retrain_script_path, data_path)
print(run_command_str)
Image("../resources/tensorboard_mobilenet.png", width=800)
# Validation accuracy at 97%
#
# Since the mobilenet model is small (< 10 MB) we can upload it to GitHub
model_path = "mobilenet_v2/output_graph.pb"
file_size = int(os.stat(model_path).st_size/1e6)
print("The generated model (`.pb`) file is {} MB".format(file_size))
label_file = "mobilenet_v2/output_labels.txt"
# Let's make a prediction directly with this model:
from label_image import load_graph, load_labels, read_tensor_from_image_file
graph = load_graph(model_path)
img_path
input_height = 224
input_width = 224
t = read_tensor_from_image_file(img_path, input_height=input_height, input_width=input_width) # Return the reshaped image
t.shape
input_operation = graph.get_operation_by_name("import/Placeholder")
output_operation = graph.get_operation_by_name("import/final_result")
# +
with tf.Session(graph=graph) as sess:
results = sess.run(output_operation.outputs[0], {
input_operation.outputs[0]: t
})
results = np.squeeze(results)
top_k = results.argsort()[-5:][::-1]
labels = load_labels(label_file)
for i in top_k:
print(labels[i], results[i])
# -
# Let's put the data in a common format used by ResNet etc.
# +
data = {}
data["predictions"] = []
for i in top_k:
r = {"label": labels[i], "probability": round(results[i], 5)}
data["predictions"].append(r)
data["success"] = True
data
# -
| land_classification/tensorflow/Tensorflow transfer learning 16-4-2018.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Solve Problem 2.3 with $P=0$ (no force applied at node 3) and with node 5 given a fixed, known displacement of $\delta$ as shown in Figure P2-4
# a. Obtain the global stiffness matrix by direct superposition
# $$
# \left\{\begin{array}{l}
# f_{1 x}^{(1)} \\
# f_{2 x}^{(1)}
# \end{array}\right\}=\left[\begin{array}{cc}
# k & -k \\
# -k & k
# \end{array}\right]\left\{\begin{array}{l}
# u_{1} \\
# u_{2}
# \end{array}\right\}
# $$
# $$
# \left\{\begin{array}{l}
# f_{2 x}^{(2)} \\
# f_{3 x}^{(2)}
# \end{array}\right\}=\left[\begin{array}{cc}
# k & -k \\
# -k & k
# \end{array}\right]\left\{\begin{array}{l}
# u_{2} \\
# u_{3}
# \end{array}\right\}
# $$
# $$
# \left\{\begin{array}{l}
# f_{3 x}^{(3)} \\
# f_{4 x}^{(3)}
# \end{array}\right\}=\left[\begin{array}{cc}
# k & -k \\
# -k & k
# \end{array}\right]\left\{\begin{array}{l}
# u_{3} \\
# u_{4}
# \end{array}\right\}
# $$
# $$
# \left\{\begin{array}{l}
# f_{4 x}^{(4)} \\
# f_{5 x}^{(4)}
# \end{array}\right\}=\left[\begin{array}{cc}
# k & -k \\
# -k & k
# \end{array}\right]\left\{\begin{array}{l}
# u_{4} \\
# u_{5}
# \end{array}\right\}
# $$
# By assembling the element equations we obtain the global stiffness equations:
#
# $$
# \left\{\begin{array}{I}
# F_{1 x} \\
# F_{2 x} \\
# F_{3 x} \\
# F_{4 x} \\
# F_{5 x}
# \end{array}\right\}=\left[\begin{array}{cc}
# k & -k & 0 & 0 & 0 \\
# -k & 2 k & -k & 0 & 0 \\
# 0 & -k & 2 k & -k & 0 \\
# 0 & 0 & -k & 2 k & -k \\
# 0 & 0 & 0 & -k & k
# \end{array}\right]\left\{\begin{array}{l}
# u_{1} \\
# u_{2} \\
# u_{3} \\
# u_{4} \\
# u_{5}
# \end{array}\right\}
# $$
#
# Applying the boundary conditions $u_{1}=0$ and $u_{5}=\delta$:
#
# $$
# \left\{\begin{array}{c}
# F_{1 x} \\
# 0 \\
# 0 \\
# 0 \\
# F_{5 x}
# \end{array}\right\}=\left[\begin{array}{ccccc}
# k & -k & 0 & 0 & 0 \\
# -k & 2 k & -k & 0 & 0 \\
# 0 & -k & 2 k & -k & 0 \\
# 0 & 0 & -k & 2 k & -k \\
# 0 & 0 & 0 & -k & k
# \end{array}\right]\left\{\begin{array}{l}
# 0 \\
# u_{2} \\
# u_{3} \\
# u_{4} \\
# \delta
# \end{array}\right\}
# $$
#
# $$
# \begin{array}{l}
# F_{1 x}=-k u_{2} \\
# 0=2 k u_{2}-k u_{3} \\
# 0=-k u_{2}+2 k u_{3}-k u_{4} \\
# 0=-k u_{3}+2 k u_{4}-k \delta \\
# F_{5 x}=-k u_{4}+k \delta
# \end{array}
# $$
#
# $$
# \begin{array}{l}
# u_{2}=\frac{1}{2} u_{3} \\
# u_{4}=\frac{u_{3}+\delta}{2} \\
# -\frac{1}{2} u_{3}+2 u_{3}-\frac{\delta}{2}-\frac{u_{3}}{2}=0 \\
# u_{3}=\frac{\delta}{2} \\
# u_{4}=\frac{3}{4} \delta \\
# u_{2}=\frac{\delta}{4}
# \end{array}
# $$
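# As a quick numerical check of the hand solution (a sketch; k = 1 and delta = 1 are arbitrary illustrative values):

```python
import numpy as np

k, delta = 1.0, 1.0  # arbitrary illustrative values

# Reduced system for the free DOFs u2, u3, u4 after applying u1 = 0, u5 = delta
A = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
b = np.array([0.0, 0.0, k * delta])

u2, u3, u4 = np.linalg.solve(A, b)
print(u2, u3, u4)  # matches delta/4, delta/2, 3*delta/4
```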
| chapters/chapter. 2/Problem 2.4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
# from utils import visualise
from read_mnist import load_data
import random
y_train,x_train,y_test,x_test=load_data()
print("Train data label dim: {}".format(y_train.shape))
print("Train data features dim: {}".format(x_train.shape))
print("Test data label dim: {}".format(y_test.shape))
print("Test data features dim:{}".format(x_test.shape))
# visualise(x_train)
# +
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
import os
from torchvision import transforms, datasets
from model import NeuralNet
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print("==>>> device is", device)
# Hyper-parameters
input_size = 784
hidden_size = 256
num_classes = 10
num_epochs = 1000
lr = 0.1
batch_size = 128
train_set = []
for i in range(x_train.shape[0]):
train_set.append(( torch.Tensor(x_train[i].reshape(-1, 28, 28)), y_train[i] ))
test_set = []
for i in range(x_test.shape[0]):
test_set.append(( torch.Tensor(x_test[i].reshape(-1, 28, 28)), y_test[i] ))
train_loader = torch.utils.data.DataLoader(
dataset=train_set,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(
dataset=test_set,
batch_size=batch_size,
shuffle=False)
print("==>>> total training batch number: {}".format(len(train_loader)))
print("==>>> total testing batch number: {}".format(len(test_loader)))
print("==>>> batch size: {}".format(batch_size))
for index, batch in enumerate(train_loader):
inputs = batch[0]
labels = batch[1]
if(index == 0):
print("==>>> input shape of a batch is: {}".format(inputs.shape))
print("==>>> labels shape of a batch is: {}".format(labels.shape))
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
print(model)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr)
num_batches = len(train_loader)
train_loss = []
epoch_counter = []
cnt = 0
for epoch in range(num_epochs):
for idx, (inputs, labels) in enumerate(train_loader):
inputs = inputs.reshape(-1,28*28).to(device)
labels = labels.to(device)
# Forward propagation
outputs = model(inputs)
loss = loss_fn(outputs, labels)
# backward pass and make step
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss.append(loss.item())
cnt += 1
epoch_counter.append(cnt)
if(epoch%50==0):
print("epoch is {}/{} loss is: {}".format(epoch, num_epochs, loss.item()))
plt.plot(epoch_counter, train_loss)
torch.save(model.state_dict(), 'model.pth')
torch.save(optimizer.state_dict(), 'optimizer.pth')
with torch.no_grad():
correct = 0
total = 0
for idx, (inputs, labels) in enumerate(test_loader):
inputs = inputs.reshape(-1, 28*28).to(device)
labels = labels.to(device)
preds = model(inputs)
values, indices = torch.max(preds, 1)
total += labels.shape[0]
correct += (labels == indices).sum().item()
print("Accuracy of the network is: {}%".format(100*correct / total) )
# -
| .ipynb_checkpoints/Pytorch-NN-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# +
def draw(x1,x2):
ln=plt.plot(x1,x2)
def sigmoid(score):
return 1/(1+np.exp(-score))
def error(line_params,points,z):
m=points.shape[0]
p=sigmoid(points*line_params)
cross_entropy=-(np.log(p).T*z+np.log(1-p).T*(1-z))*(1/m)
return cross_entropy
def gradient_descent(line_params,points,z,alpha):
for i in range(500):
m=points.shape[0]
p=sigmoid(points*line_params)
gradient=(points.T*(p-z))*(alpha/m)
line_params=line_params-gradient
w1=line_params.item(0)
w2=line_params.item(1)
b=line_params.item(2)
x=np.array([bottom_region[:,0].min(),top_region[:,0].max()])
#w1x+w2y+b=0
y=-b/w2+x*(-w1/w2)
draw(x,y)
# +
n_pts=100
#random_x_values=np.random.normal(10,2,n_pts)
#random_y_values=np.random.normal(10,2,n_pts)
np.random.seed(0)
bias=np.ones(n_pts)
top_region=np.array([np.random.normal(10,2,n_pts),np.random.normal(12,2,n_pts),bias]).T
bottom_region=np.array([np.random.normal(5,2,n_pts),np.random.normal(6,2,n_pts),bias]).T
all_points=np.vstack((top_region,bottom_region))
line_params=np.matrix([np.zeros(3)]).T
z=np.array([np.zeros(n_pts),np.ones(n_pts)]).reshape(n_pts*2,1)
# +
_, ax=plt.subplots(figsize=(4,4))
ax.scatter(top_region[:,0],top_region[:,1],color='r')
ax.scatter(bottom_region[:,0],bottom_region[:,1],color='b')
gradient_descent(line_params,all_points,z,0.06)
plt.show()
print(error(line_params,all_points,z))
| perceptron/linear_reg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/fridymandita/Struktur-Data/blob/main/Struktur_Data_Bubble_Sort.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="nHc5AsxIwtKe"
# # Bubble Sort
# + colab={"base_uri": "https://localhost:8080/"} id="X4xzSeA6wz6U" outputId="b054d759-7e97-4b95-e7f5-3b483c4e0627"
# !curl -s -o setup.sh https://raw.githubusercontent.com/tproffen/ORCSGirlsPython/master/Algorithms/Helpers/setup_activity2.sh
# !bash setup.sh
# + id="8epCIKhMw4JM"
# %matplotlib inline
import numpy as np
from Helpers.helpers import *
# + [markdown] id="ieERacllw6cN"
# ## Data
# + colab={"base_uri": "https://localhost:8080/"} id="8EAzdjJmw-0e" outputId="08dcba5d-874b-4a12-e97e-0afeba8f931c"
data=[29,99,27,41,66,28,44,78,87,19,31,76,58,88,83,97,12,21,44]
print ("Data: ",data)
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="Gz3YyvaixGDm" outputId="896904e2-0767-4953-e4c2-356b5f867853"
updateGraph(1,data)
# + id="NE_q6NC80awt"
def bubble_sort(data):
# We go through the list as many times as there are elements
for i in range(len(data)):
# We want the last pair of adjacent elements to be (n-2, n-1)
for j in range(len(data) - 1):
if data[j] > data[j+1]:
# Swap
data[j], data[j+1] = data[j+1], data[j]
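# A common refinement of the version above, shown as a sketch: shrink the scanned range each pass (the tail is already in place) and stop early once a full pass performs no swaps.

```python
def bubble_sort_early_exit(data):
    for i in range(len(data)):
        swapped = False
        # After i passes, the last i elements are already in their final place
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:  # no swaps means the list is sorted; stop early
            break
```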
# + colab={"base_uri": "https://localhost:8080/"} id="T1xWHctaHmGS" outputId="8d5b1841-2fda-4e55-96e8-5018aea7cbbd"
bubble_sort(data)
print(data)
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="bqEHNVXNH7lg" outputId="9e74c9f6-57a2-4540-903b-58b81ee749e6"
updateGraph(0,data)
| Struktur_Data_Bubble_Sort.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # _Experimentation_
#
# **TL;DR** --> Explore text and experiment with `re` library/regular expressions.
# +
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# import libraries
import fundamentals
import pandas as pd
pd.options.display.max_columns = None
import numpy as np
import random
import string
import os
import re
from tqdm.autonotebook import tqdm
tqdm.pandas()
# Matplotlib
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('fivethirtyeight')
# -
# ## _Load Data_
# +
# strings of file paths and file name for data
origpath = "/notebooks/CovidDisinfo-Detect/experiments"
datapath = "/notebooks/CovidDisinfo-Detect/data/interim"
filename = "covid19_20200324.pkl"
# load data into pandas dataframe
df = fundamentals.load_data(origpath, datapath, filename)
# -
# ## _Regex_
# ### "china"
def china_search(text):
"""
Searches for a given term within a text.
"""
regex = re.compile(r"china", re.I)
return regex.findall(text)
china = df["tweet"].progress_apply(lambda x: len(china_search(x)))
china.value_counts()
# ### "5g"
def fiveg_search(text):
"""
Searches for a given term within a text.
"""
regex = re.compile(r"5g", re.I)
return regex.findall(text)
fiveg = df["tweet"].progress_apply(lambda x: len(fiveg_search(x)))
fiveg.value_counts()
# ### "bioweapon"
def bioweapon_search(text):
"""
    Searches for the term bioweapon within a text.
"""
regex = re.compile(r"bioweapon", re.I)
return regex.findall(text)
bioweapon = df["tweet"].progress_apply(lambda x: len(bioweapon_search(x)))
bioweapon.value_counts()
# ### "flu"
def flu_search(text):
"""
Searches for the term flu within text.
"""
regex = re.compile(r"flu", re.I)
return regex.findall(text)
flu = df["tweet"].progress_apply(lambda x: len(flu_search(x)))
flu.value_counts()
# ### "silver solution"
def silver_search(text):
"""
    Searches for the term "silver" (as a proxy for "silver solution") within a text.
"""
regex = re.compile(r"silver", re.I)
return regex.findall(text)
silver = df["tweet"].progress_apply(lambda x: len(silver_search(x)))
silver.value_counts()
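# The five near-identical helpers above differ only in their pattern, so they could be collapsed into a single factory. A sketch (the name `make_search` is our own):

```python
import re

def make_search(term):
    """Return a function that finds all case-insensitive matches of term."""
    regex = re.compile(term, re.I)
    return lambda text: regex.findall(text)

china_search = make_search(r"china")
fiveg_search = make_search(r"5g")

# e.g. counts = df["tweet"].progress_apply(lambda x: len(china_search(x)))
```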
| experiments/playground_nbs/regex-experiments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Using the MANN Package to train a Fully Connected Neural Network
#
# In this notebook, the MANN package will be used to train pruned fully connected neural networks. We will train two single-task networks on two separate tasks and one multitask network which performs both tasks.
# +
# Load the MANN package and TensorFlow
import tensorflow as tf
import mann
# Load the make_classification function from scikit-learn
from sklearn.datasets import make_classification
# +
# We will use two separate generated datasets
x1, y1 = make_classification(
n_samples = 10000,
n_features = 10,
n_informative = 8,
n_classes = 2,
n_clusters_per_class = 1
)
x2, y2 = make_classification(
n_samples = 10000,
n_features = 20,
n_informative = 13,
n_classes = 10,
n_clusters_per_class = 1
)
# Flatten the outputs for simplicity
y1 = y1.reshape(-1, 1)
y2 = y2.reshape(-1, 1)
# Create a callback to stop training early
callback = tf.keras.callbacks.EarlyStopping(min_delta = 0.01, patience = 3, restore_best_weights = True)
# -
# ## Create the first model
#
# This first model is a fully connected model which will perform the first task. It will be pruned utilizing the MANN package so that most of its weights are 0.
# +
# After data generation, create the single-task model using the TensorFlow Keras Functional API
input_layer = tf.keras.Input(x1.shape[-1])
# Instead of using keras Dense Layers, use MANN MaskedDense Layers
x = mann.layers.MaskedDense(
100,
activation = 'relu'
)(input_layer)
for _ in range(5):
x = mann.layers.MaskedDense(
100,
activation = 'relu'
)(x)
# Create the output layer as another MANN MaskedDense Layer
output_layer = mann.layers.MaskedDense(1, activation = 'sigmoid')(x)
# Create the model
model = tf.keras.Model(input_layer, output_layer)
# +
# Compile the model for training and masking
model.compile(
loss = 'binary_crossentropy',
metrics = ['accuracy'],
optimizer = 'adam'
)
# Mask (prune) the model using the MANN package
model = mann.utils.mask_model(
model = model, # The model to be pruned
percentile = 90, # The percentile to be masked, for example, if the value is 90, then 90% of weights will be masked
method = 'gradients', # The method to use to mask, either 'gradients' or 'magnitude'
exclusive = True, # Whether weight locations must be exclusive to each task
x = x1[:2000], # The input data
y = y1[:2000] # The expected outputs
)
# Recompile the model
model.compile(
loss = 'binary_crossentropy',
metrics = ['accuracy'],
optimizer = 'adam'
)
# -
# To show how the layers of the model have been pruned, output the kernel of the first MaskedDense Layer
model.layers[1].get_weights()[0]
# Fit the model on the first dataset
model.fit(x1, y1, batch_size = 128, epochs = 100, validation_split = 0.2, callbacks = [callback])
print(f'First Model Accuracy: {((model.predict(x1)>= 0.5).astype(int).flatten() == y1.flatten()).sum()/y1.shape[0]}')
# ## Create the second model
#
# This second model is a fully connected model which will perform the second task. It will be pruned utilizing the MANN package so that most of its weights are 0.
# +
# Create the second model
input_layer = tf.keras.Input(x2.shape[-1])
# Instead of using keras Dense Layers, use MANN MaskedDense Layers
x = mann.layers.MaskedDense(
100,
activation = 'relu'
)(input_layer)
for _ in range(5):
x = mann.layers.MaskedDense(
100,
activation = 'relu'
)(x)
# Create the output layer as another MANN MaskedDense Layer
output_layer = mann.layers.MaskedDense(10, activation = 'softmax')(x)
# Create the model
model = tf.keras.Model(input_layer, output_layer)
# +
# Repeat the pruning process for the second model
model.compile(loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'], optimizer = 'adam')
model = mann.utils.mask_model(
model = model,
percentile = 90,
method = 'gradients',
exclusive = True,
x = x2[:2000],
y = y2.reshape(-1, 1)[:2000]
)
model.compile(loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'], optimizer = 'adam')
model.fit(x2, y2, epochs = 100, batch_size = 128, validation_split = 0.2, callbacks = [callback])
print(f'Second Model Accuracy: {(model.predict(x2).argmax(axis = 1) == y2.flatten()).astype(int).sum()/y2.shape[0]}')
# -
# ## Create the MANN
#
# The third and final model we create here will be a multitask model (MANN) which performs both tasks.
# +
# Train a Multitask Model
input1 = tf.keras.layers.Input(x1.shape[-1])
input2 = tf.keras.layers.Input(x2.shape[-1])
dense1 = mann.layers.MaskedDense(100, activation = 'relu')(input1)
dense2 = mann.layers.MaskedDense(100, activation = 'relu')(input2)
x = mann.layers.MultiMaskedDense(100, activation = 'relu')([dense1, dense2])
for _ in range(4):
x = mann.layers.MultiMaskedDense(100, activation = 'relu')(x)
sel1 = mann.layers.SelectorLayer(0)(x)
sel2 = mann.layers.SelectorLayer(1)(x)
output1 = mann.layers.MaskedDense(1, activation = 'sigmoid')(sel1)
output2 = mann.layers.MaskedDense(10, activation = 'softmax')(sel2)  # softmax for the 10-class task
model = tf.keras.Model([input1, input2], [output1, output2])
model.compile(
loss = ['binary_crossentropy', 'sparse_categorical_crossentropy'],
metrics = ['accuracy'],
optimizer = 'adam'
)
model = mann.utils.mask_model(
model,
90,
method = 'gradients',
exclusive = True,
x = [x1[:2000], x2[:2000]],
y = [y1.reshape(-1, 1)[:2000], y2.reshape(-1, 1)[:2000]]
)
model.compile(
loss = ['binary_crossentropy', 'sparse_categorical_crossentropy'],
metrics = ['accuracy'],
optimizer = 'adam'
)
model.fit([x1, x2], [y1, y2], epochs = 100, batch_size = 128, callbacks = [callback], validation_split = 0.2)
p1, p2 = model.predict([x1, x2])
p1 = (p1 >= 0.5).astype(int)
p2 = p2.argmax(axis = 1)
# -
# # Predict Using the MANN
#
# Now that the MANN model has been trained, we can use it to get predictions just as we would a traditional model. In this case, a list of predictions is returned, with each index corresponding to a task.
print(f'Multitask Task 1 Accuracy: {(p1.flatten() == y1.flatten()).sum()/y1.flatten().shape[0]}')
print(f'Multitask Task 2 Accuracy: {(p2.flatten() == y2.flatten()).sum()/y2.flatten().shape[0]}')
| FullyConnected.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/gpdsec/Residual-Neural-Network/blob/main/Custom_Resnet_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="B8RfLxsnMl_s"
# *This is a custom ResNet, trained for demonstration purposes rather than accuracy.
# The dataset used is the cats_vs_dogs dataset from tensorflow_datasets, with a **custom augmentation layer** for data augmentation*
#
# ---
#
# + colab={"base_uri": "https://localhost:8080/"} id="4MgEA624smN2" outputId="5873c3d6-9394-4fb2-e827-190845321bac"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="WJFItf1FQFF7"
# ### **1. Importing Libraries**
#
#
#
#
# + id="uT5CUoUogPlA"
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization, Input, GlobalMaxPooling2D, add, ReLU
from tensorflow.keras import layers
from tensorflow.keras import Sequential
import tensorflow_datasets as tfds
import pandas as pd
import numpy as np
from tensorflow.keras import Model
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from PIL import Image
from tqdm.notebook import tqdm
import os
import time
# %matplotlib inline
# + [markdown] id="7P1M6AcOQjjE"
# ### **2. Loading & Processing Data**
#
#
#
#
# + [markdown] id="vBNrLBE83NEv"
# ##### **Loading Data**
# + id="EcQDkjdEeQwR"
(train_ds, val_ds, test_ds), info = tfds.load(
'cats_vs_dogs',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True)
# + id="xTl6aKPDOt7K"
## Image preprocessing function
def preprocess(img, lbl):
image = tf.image.resize_with_pad(img, target_height=224, target_width=224)
image = tf.divide(image, 255)
label = [0,0]
if int(lbl) == 1:
label[1]=1
else:
label[0]=1
return image, tf.cast(label, tf.float32)
# + id="-uBvOMzZqEAT"
train_ds = train_ds.map(preprocess)
test_ds = test_ds.map(preprocess)
val_ds = val_ds.map(preprocess)
# + id="5w5w8woc8vl0" colab={"base_uri": "https://localhost:8080/"} outputId="510d8f95-dc72-4d10-b439-b4340a11eb0c"
info
# + [markdown] id="cwx8pyNIiDxy"
# #### **Data Augmentation layer**
#
#
#
#
# + id="bkMoFN98qNcT"
###### Important Variables
batch_size = 32
shape = (224, 224, 3)
training_steps = int(18610/batch_size)
validation_steps = int(2326/batch_size)
path = '/content/drive/MyDrive/Colab Notebooks/cats_v_dogs.h5'
# + id="MU2FnjdbziLU"
####### Data augmentation layer
# RandomFlip and RandomRotation suit my needs for data augmentation
augmentation=Sequential([
layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
layers.experimental.preprocessing.RandomRotation(0.2),
])
####### Data Shuffle and batch Function
def shufle_batch(train_set, val_set, batch_size):
train_set=(train_set.shuffle(1000).batch(batch_size))
train_set = train_set.map(lambda x, y: (augmentation(x, training=True), y))
val_set = (val_set.shuffle(1000).batch(batch_size))
val_set = val_set.map(lambda x, y: (augmentation(x, training=True), y))
return train_set, val_set
train_set, val_set = shufle_batch(train_ds, val_ds, batch_size)
# + [markdown] id="9a8WiBJYvH5O"
# ## **3. Creating Model**
# + [markdown] id="4NmlhIBQtPy7"
# ##### **Creating Residual block**
# + id="fXqH5p9JtPBZ"
def residual_block(x, feature_map, filter=(3,3) , _strides=(1,1), _network_shortcut=False):
shortcut = x
x = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(x)
x = BatchNormalization()(x)
if _network_shortcut :
shortcut = Conv2D(feature_map, filter, strides=_strides, activation='relu', padding='same')(shortcut)
shortcut = BatchNormalization()(shortcut)
x = add([shortcut, x])
x = ReLU()(x)
return x
# + id="lDQ1SH_DCv5t"
# Build the model using the functional API
i = Input(shape)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
x = BatchNormalization()(x)
x = residual_block(x, 32, filter=(3,3) , _strides=(1,1), _network_shortcut=False)
#x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
#x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = residual_block(x,64, filter=(3,3) , _strides=(1,1), _network_shortcut=False)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dropout(0.2)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.2)(x)
x = Dense(2, activation='sigmoid')(x)
model = Model(i, x)
# + id="uYX32wv46PUZ" colab={"base_uri": "https://localhost:8080/"} outputId="1deca391-e0c0-4305-ecc9-820d94d0d6de"
model.summary()   # no compile() needed: the custom training loop below handles optimization
# + [markdown] id="fak0KjtASMdd"
# ### **4. Optimizer and loss Function**
# + id="tv2EtPA2j6nn"
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=False)
Optimiser = tf.keras.optimizers.Adam()
# + [markdown] id="dJDUJLAXSoZn"
# ### **5. Metrics for Loss and Accuracy**
# + id="_ZV_OLFelKHN"
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.BinaryAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name="test_loss")
test_accuracy = tf.keras.metrics.BinaryAccuracy(name='test_accuracy')
# + [markdown] id="Gk8eJelPTHA3"
# ### **6. Function for training and Testing**
# + id="tcbvePZavERI"
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        prediction = model(images, training=True)
        loss = loss_object(labels, prediction)
    gradient = tape.gradient(loss, model.trainable_variables)
    Optimiser.apply_gradients(zip(gradient, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, prediction)
# + id="UHkF3irCnJ4e"
@tf.function
def test_step(images, labels):
    prediction = model(images, training=False)
    t_loss = loss_object(labels, prediction)
    test_loss(t_loss)
    test_accuracy(labels, prediction)
# + [markdown] id="aroHrgIrTYSN"
# ### **7. Training Model**
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="rx6KtgRvngNN" outputId="1df669ff-211b-44f8-b2d5-bb5468b17926"
EPOCHS = 25
Train_LOSS = []
TRain_Accuracy = []
Test_LOSS = []
Test_Accuracy = []
for epoch in range(EPOCHS):
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()
    print(f'Epoch : {epoch+1}')
    desc = "EPOCHS {:0>4d}".format(epoch+1)
    for images, labels in tqdm(train_set, total=training_steps, desc=desc):
        train_step(images, labels)
    for test_images, test_labels in val_set:
        test_step(test_images, test_labels)
    print(
        f'Loss: {train_loss.result()}, '
        f'Accuracy: {train_accuracy.result()*100}, '
        f'Test Loss: {test_loss.result()}, '
        f'Test Accuracy: {test_accuracy.result()*100}'
    )
    Train_LOSS.append(train_loss.result())
    TRain_Accuracy.append(train_accuracy.result()*100)
    Test_LOSS.append(test_loss.result())
    Test_Accuracy.append(test_accuracy.result()*100)
    ### Saving the best model: lowest validation loss without an accuracy drop
    if epoch == 0:
        min_Loss = test_loss.result()
        min_Accuracy = test_accuracy.result()*100
        model.save_weights(path)   # save the first epoch as the initial best
    elif min_Loss > test_loss.result() and min_Accuracy <= test_accuracy.result()*100:
        min_Loss = test_loss.result()
        min_Accuracy = test_accuracy.result()*100
        print(f"Saving Best Model {epoch+1}")
        model.save_weights(path)   # save the improved weights to Drive
# + [markdown] id="GGsV0OJwZB1H"
# ### **8. Plotting Loss and Accuracy per Epoch**
# + id="A84-OC64WdJ3" colab={"base_uri": "https://localhost:8080/", "height": 299} outputId="088528cf-0629-4b1f-acc0-b5b80fe2db23"
# Plot loss per epoch
plt.plot(Train_LOSS, label='loss')
plt.plot(Test_LOSS, label='val_loss')
plt.title('Loss per epoch')
plt.legend()
# + id="6DD26kXBYnwh" colab={"base_uri": "https://localhost:8080/", "height": 299} outputId="fe3960fa-c19c-4cf9-af8a-5bc22bc73895"
# Plot accuracy per epoch
plt.plot(TRain_Accuracy, label='accuracy')
plt.plot(Test_Accuracy, label='val_accuracy')
plt.title('Accuracy per epoch')
plt.legend()
# + [markdown] id="A3qso3V3AV6d"
# ## 9. Evaluating the Model
# + [markdown] id="rZyD0Lkrulo1"
# ##### **Note**
# Testing the model's accuracy on a completely unseen dataset.
# + id="fR1dNk401vbZ"
model.load_weights(path)
# + colab={"base_uri": "https://localhost:8080/"} id="faonoCmqGzL5" outputId="feb7a38c-b154-4bc8-8863-1e1a54d92300"
len(test_ds)
# + id="XIQZ-pLjFkB4"
test_set = test_ds.shuffle(50).batch(2326)
# + id="0SWtr9502DYY"
for images, labels in test_set:
    prediction = model.predict(images)
    break
## Function for accuracy
def accuracy(prediction, labels):
    correct = 0
    for i in range(len(prediction)):
        pred = prediction[i]
        label = labels[i]
        if pred[0] > pred[1] and label[0] > label[1]:
            correct += 1
        elif pred[0] < pred[1] and label[0] < label[1]:
            correct += 1
    return (correct/len(prediction))*100
# + id="KRBnYs5nHN3b" colab={"base_uri": "https://localhost:8080/"} outputId="6b7df5c3-99e8-4314-be76-72aa5ab7019a"
print(accuracy(prediction, labels))
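Since the labels are one-hot pairs, the element-wise loop above can also be written as a vectorized NumPy equivalent (a sketch; unlike the strict-inequality loop, ties between the two scores count as the argmax class):

```python
import numpy as np

def accuracy_vectorized(prediction, labels):
    """Percentage of rows where the predicted class index matches the label's."""
    pred_class = np.argmax(prediction, axis=1)   # index of the larger score per row
    true_class = np.argmax(labels, axis=1)
    return float(np.mean(pred_class == true_class) * 100)
```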
# + id="bKNBK8hONSu-"
| Custom_Resnet_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.2 64-bit
# language: python
# name: python3
# ---
# +
import os, sys
PWD = os.getenv('PWD')
os.chdir(PWD)
sys.path.insert(0, PWD)
print(PWD)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "WMKT.settings")
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"
from django.conf import settings
import django
django.setup()
from builder.models import *
demo = Demo_page.objects.all()
print(demo)
| tests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from bridgelib.base import Hand, Deal
from bridgelib.constants import *
from bridgelib.play import dds_solver, DDSResult, LiveResult
from bridgelib.scrape import scrape_ACBL_tournament, scrape_page, write_one_tournament
from bridgelib.score import score_contract, reverse_score, score_array
from bridgelib.bidding import Bidding, PassStrategy
from bridgelib.GIB import find_bid, GIBStrategy
from bridgelib.helpers import array_from_solver_out
from bridgelib.data import gen_dds, read_dds, read_results_file
import numpy as np
import time
# -
for _ in range(100):
    write_one_tournament()
for _ in range(5000):
    write_one_tournament()
| docs/scrape-tournaments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import rebound as rb
import reboundx as rx
# +
tup_num = 25
e_b = np.linspace(0, 0.7, tup_num)
a_p = np.linspace(1, 5, tup_num)
Np = 4
Qex = []
for x in range(4, 7):
    Q = 10**x
    Qex.append(Q)
tup_list = []
for Q in Qex:
    for e in e_b:
        for a in a_p:
            tup_list.append((Q, e, a, Np))
Ne = len(e_b)
Na = len(a_p)
Nq = len(Qex)
# +
sunnyplot = np.loadtxt('SUNNY_map_tup25plan15_Qi10000_Qf1000000.npy')
fig, ax = plt.subplots(1, Nq, figsize=(20,5), constrained_layout=True)
ax = ax.ravel()
SurvTimeAll = np.reshape(sunnyplot, [Nq,Ne,Na])
SurvTimeArr = [SurvTimeAll[i,:,:] for i in range(Nq)]
for i in range(Nq):
    pcm = ax[i].pcolormesh(e_b, a_p, SurvTimeArr[i].T, shading='auto')
    a_b = 2.278 + 3.824*e_b - 1.71*(e_b**2)
    a_c = 1.6 + 5.1*e_b + (- 2.22*(e_b**2)) + 4.12*0.5 + (- 4.27*e_b*0.5) + (- 5.09*(0.5**2)) + 4.61*(e_b**2)*(0.5**2)
    ax[i].plot(e_b, a_c, color='lightsteelblue')
    ax[i].scatter(e_b, a_c, color='lightsteelblue')
    ax[i].plot(e_b, a_b, color='olive')
    ax[i].scatter(e_b, a_b, color='olive')
    ax[i].set_title('Q={:.1e}'.format(Qex[i]))
    ax[i].set_xlabel('Binary Eccentricity (e)')
    ax[i].set_ylabel('Planetary Semi-Major Axis (a)')
    ax[i].set_xlim(0.0, 0.7)
    ax[i].set_ylim(1, 5)
plt.colorbar(pcm, location='right', label='Test Particle Survival Times')
# -
sa = None
# +
sa = rx.SimulationArchive('eb0.525_ap4.500_Np15.0_tup25.0_Q10000.0_tau0.0030.bin', rebxfilename='xarchive.bin')
print("Number of snapshots: %d" % len(sa))
# figure out how to access the Hash names, may be useful
# -
print(sa[10][0].particles[0].params["tctl_k1"])
print(sa[25][0].t)
# +
#help(rx.Extras)
# +
# Functions to Plot Evolution of Binary: Eccentricity and Semi-Major Axis
# only actually need to plot these values based on data from second binary as the primary is the center
# of the simulation
def binary_semi(archive):
    # Plot the binary's semi-major axis over time; only particle 1 is needed
    # since the primary is the center of the simulation
    sim_arc = rb.SimulationArchive(archive)
    x_arr = []
    y_arr = []
    for snap in range(len(sim_arc)):
        base = sim_arc[snap].particles[1]
        orb_element = base.a
        time = sim_arc[snap].t
        y_arr.append(orb_element)
        x_arr.append(time)
    fig = plt.figure(figsize=(50,10))
    plt.xlabel("Orbit Number", fontsize=35)
    plt.ticklabel_format(useOffset=False)
    plt.ylabel("Binary Semi-Major Axis", fontsize=35)
    plt.xticks(fontsize=35)
    plt.yticks(fontsize=35)
    plt.plot(x_arr, y_arr, color='teal')
    plt.xlim(0.5, 2.)   # was `plt.set_xlim = ([0.5, 2.])`, which silently did nothing
# -
def binary_ecc(archive):
    sim_arc = rb.SimulationArchive(archive)
    x_arr = []
    y_arr = []
    for snap in range(len(sim_arc)):
        base = sim_arc[snap].particles[1]
        orb_element = base.e
        time = sim_arc[snap].t
        y_arr.append(orb_element)
        x_arr.append(time)
    fig = plt.figure(figsize=(50,10))
    plt.xlabel("Orbit Number", fontsize=35)
    plt.ylabel("Binary Eccentricity", fontsize=35)
    plt.xticks(fontsize=35)
    plt.yticks(fontsize=35)
    plt.plot(x_arr, y_arr, color='green')
def orbit(archive):
    # Plot eccentricity versus time for every test particle in the archive
    sim_arc = rb.SimulationArchive(archive)
    x_arr = []
    y_arr = []
    for snap in range(len(sim_arc)):
        for i in reversed(range(2, sim_arc[snap].N)):
            base = sim_arc[snap].particles[i]
            orb_element = base.e
            time = sim_arc[snap].t
            y_arr.append(orb_element)
            x_arr.append(time)
    fig = plt.figure(figsize=(50,20))
    plt.xlabel("t", fontsize=35)
    plt.ylabel("e", fontsize=35)
    plt.xticks(fontsize=35)
    plt.yticks(fontsize=35)
    plt.plot(x_arr, y_arr)
    plt.scatter(x_arr, y_arr, color='orange')
# +
# Plot Mean Motion vs Spin Rate
# Mean motion: n_b = sqrt(G*(M1 + M2)/a**3)
def spin_mm(sim, archive):
    sim_arc = rx.SimulationArchive(sim, rebxfilename=archive)
    x_arr = []
    y_arr = []
    for snap in range(len(sim_arc)):
        base = sim_arc[snap][0]
        mean_mtn = base.particles[1].n
        spin = base.particles[0].params["Omega"]
        y_arr.append(mean_mtn)
        x_arr.append(spin)
    fig = plt.figure(figsize=(50,10))
    plt.ticklabel_format(useOffset=False)
    plt.xlabel("Spin Rate (Omega)", fontsize=35)
    plt.ylabel("Binary Mean Motion (nb)", fontsize=35)
    plt.xticks(fontsize=35)
    plt.yticks(fontsize=35)
    plt.plot(x_arr, y_arr, color='green')
    # plotting (x - y) and (y - x) would show the dissipation
# -
spin_mm("eb0.525_ap4.500_Np15.0_tup25.0_Q10000.0_tau0.0030.bin", "xarchive.bin")
# +
""" Analytic Predictions for Binary Evolution """
# The original cell wrote these as bare (non-Python) equations; they are kept
# here as comments so the cell runs.
# Evolution of binary semi-major axis:
#   (1/ab)*(dab/dt) = ((4*ab*K_star)/(G*M1*M2)) * (N*Omega_star/nb - N_a)
# Evolution of binary eccentricity:
#   (1/e)*(de/dt) = ((11*ab*K_star)/(G*M1*M2)) * (Omega_e*Omega_star/nb - (18/11)*N_e)
# where, for a simulation `sim`:
#   nb = sim.particles[1].n
#   R_star = sim.particles[0].r
#   K_star = (3/2)*k2*tau*G*(M1**2)*((M2/M1)**2)*((R_star/ab)**6)*nb**2
#   N_a = (1 + 31/2*e**2 + 255/8*e**4 + 185/16*e**6 + 25/64*e**8) / (1-e**2)**(15/2)
#   Omega_e = (1 + 3/2*e**2 + 1/8*e**4) / (1-e**2)**5
#   N_e = (1 + 15/4*e**2 + 15/8*e**4 + 5/64*e**6) / (1-e**2)**(13/2)
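The eccentricity functions appearing in these tidal-evolution expressions can be evaluated numerically; a small helper (a sketch, with `e` the binary eccentricity — the function name is illustrative):

```python
import numpy as np

def ecc_funcs(e):
    """Eccentricity functions N_a(e), Omega_e(e), N_e(e) from the analytic
    tidal-evolution expressions; all three reduce to 1 for a circular orbit."""
    one_m_e2 = 1.0 - e**2
    N_a = (1 + 31/2*e**2 + 255/8*e**4 + 185/16*e**6 + 25/64*e**8) / one_m_e2**7.5
    Omega_e = (1 + 3/2*e**2 + 1/8*e**4) / one_m_e2**5
    N_e = (1 + 15/4*e**2 + 15/8*e**4 + 5/64*e**6) / one_m_e2**6.5
    return N_a, Omega_e, N_e
```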
| class_sunny.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Permanent versus Persistent Income Shocks
#
# + code_folding=[0]
# Initial imports and notebook setup
# %matplotlib inline
import matplotlib.pyplot as plt
import sys
import os
from copy import copy
from HARK.ConsumptionSaving.ConsGenIncProcessModel import *
import HARK.ConsumptionSaving.ConsumerParameters as Params
from HARK.utilities import plotFuncs
# -
# This notebook presents classes that solve consumption-saving models with idiosyncratic shocks to income that are not necessarily fully transitory or fully permanent. $\texttt{ConsGenIncProcessModel}$ extends $\texttt{ConsIndShockModel}$ by explicitly tracking persistent income $p_t$ as a state variable.
#
# This module currently solves two types of models:
# 1. A model with explicit permanent income shocks.
# 2. A model allowing (log) persistent income to follow an AR(1) process.
#
#
# ## GenIncProcess model
# Unlike in $\texttt{ConsIndShockModel}$, consumers do not necessarily have the same predicted level of persistent income next period as this period (after controlling for growth). Instead, this model allows them to have a generic function $G$ that translates current persistent income into expected next period persistent income (subject to shocks).
#
#
# The agent's problem can be written in Bellman form as:
#
# \begin{eqnarray*}
# v_t(M_t,p_t) &=& \max_{c_t} U(c_t) + \beta (1-\mathsf{D}_{t+1}) \mathbb{E} [v_{t+1}(M_{t+1}, p_{t+1}) ], \\
# a_t &=& M_t - c_t, \\
# a_t &\geq& \underline{a}, \\
# M_{t+1} &=& R a_t + \theta_{t+1}, \\
# p_{t+1} &=& G_{t+1}(p_t)\psi_{t+1}, \\
# \psi_t \sim F_{\psi t} &\qquad& \theta_t \sim F_{\theta t}, \mathbb{E} [F_{\psi t}] = 1, \\
# U(c) &=& \frac{c^{1-\rho}}{1-\rho}.
# \end{eqnarray*}
# The one-period problem for this model is solved by the function $\texttt{solveConsGenIncProcess}$, which creates an instance of the class $\texttt{ConsGenIncProcessSolver}$. The class $\texttt{GenIncProcessConsumerType}$ extends $\texttt{IndShockConsumerType}$ to represent agents in this model. To construct an instance of this class, several parameters must be passed to the constructor, as shown in the table below (parameters are either "primitive" or "constructed" from primitives by methods carried over from previous models).
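As a quick numerical aside, the period utility function $U(c)$ in the Bellman problem above is standard CRRA; a minimal sketch (the function name is illustrative, not part of HARK):

```python
import numpy as np

def crra_utility(c, rho):
    """CRRA utility u(c) = c**(1-rho)/(1-rho); rho = 1 gives the log-utility limit."""
    c = np.asarray(c, dtype=float)
    if rho == 1.0:
        return np.log(c)
    return c**(1.0 - rho) / (1.0 - rho)
```

With the baseline $\rho = 2$ used below, utility is simply $-1/c$, which makes the curvature of the consumption function easy to reason about.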
#
# ### Example parameter values to solve GenIncProcess model
#
# | Param | Description | Code | Value | Constructed |
# | :---: | --- | --- | --- | :---: |
# | $\beta$ |Intertemporal discount factor | $\texttt{DiscFac}$ | 0.96 | |
# | $\rho $ |Coefficient of relative risk aversion | $\texttt{CRRA}$ | 2.0 | |
# | $R $ | Risk free interest factor | $\texttt{Rfree}$ | 1.03 | |
# | $1 - \mathsf{D}$ |Survival probability | $\texttt{LivPrb}$ | [0.98] | |
# | $\underline{a} $ |Artificial borrowing constraint | $\texttt{BoroCnstArt}$ | 0.0 | |
# | $(none) $ |Indicator of whether $\texttt{vFunc}$ should be computed | $\texttt{vFuncBool}$ | 'True' | |
# | $(none)$ |Indicator of whether $\texttt{cFunc}$ should use cubic lines | $\texttt{CubicBool}$ | 'False' | |
# |$F $ |A list containing three arrays of floats, representing a discrete <br> approximation to the income process: <br>event probabilities, persistent shocks, transitory shocks | $\texttt{IncomeDstn}$ | - |$\surd$ |
# | $G$ |Expected persistent income next period | $\texttt{pLvlNextFunc}$ | - | $\surd$ |
# | $ (none) $ |Array of time-varying persistent income levels | $\texttt{pLvlGrid}$ | - |$\surd$ |
# | $ (none) $ | Array of "extra" end-of-period asset values | $\texttt{aXtraGrid}$ | - |$\surd$ |
#
# ### Constructed inputs to solve GenIncProcess
# The "constructed" inputs above are built from the primitive attributes by the methods described below.
#
#
# * The input $\texttt{IncomeDstn}$ is created by the method $\texttt{updateIncomeProcess}$ which inherits from $\texttt{IndShockConsumerType}$. (*hyperlink to that notebook*)
#
# * The input $\texttt{pLvlNextFunc}$ is created by the method $\texttt{updatepLvlNextFunc}$ which uses the initial sequence of $\texttt{pLvlNextFunc}$, the mean and standard deviation of the (log) initial permanent income, $\texttt{pLvlInitMean}$ and $\texttt{pLvlInitStd}$.
# In this model, the method creates a trivial $\texttt{pLvlNextFunc}$ attribute with no persistent income dynamics. But we can overwrite it by subclasses in order to make an AR1 income process for example.
#
#
# * The input $\texttt{pLvlGrid}$ is created by the method $\texttt{updatepLvlGrid}$ which updates the grid of persistent income levels for infinite horizon models (cycles=0) and lifecycle models (cycles=1). This method draws on the initial distribution of persistent income, the $\texttt{pLvlNextFuncs}$, $\texttt{pLvlInitMean}$, $\texttt{pLvlInitStd}$ and the attribute $\texttt{pLvlPctiles}$ (percentiles of the distribution of persistent income). It then uses a simulation approach to generate the $\texttt{pLvlGrid}$ at each period of the cycle.
#
#
# * The input $\texttt{aXtraGrid}$ is created by $\texttt{updateAssetsGrid}$ which updates the agent's end-of-period assets grid by constructing a multi-exponentially spaced grid of aXtra values, based on $\texttt{aNrmInitMean}$ and $\texttt{aNrmInitStd}$.
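As a sketch of what a multi-exponentially spaced grid looks like (an illustrative construction, not necessarily HARK's exact $\texttt{updateAssetsGrid}$ implementation): repeatedly applying $\log(1+x)$ before spacing points evenly clusters grid points near the lower bound, where the consumption function has the most curvature.

```python
import numpy as np

def exp_nested_grid(gmin, gmax, count, nest=3):
    """Grid evenly spaced after applying log(1+x) `nest` times, then mapped
    back with exp(x)-1, so points cluster near gmin (illustrative sketch)."""
    lo, hi = gmin, gmax
    for _ in range(nest):
        lo, hi = np.log(1 + lo), np.log(1 + hi)
    grid = np.linspace(lo, hi, count)
    for _ in range(nest):
        grid = np.exp(grid) - 1
    return grid
```

For example, `exp_nested_grid(0.001, 30, 48, nest=3)` (mirroring the dictionary's `aXtraMin`, `aXtraMax`, `aXtraCount`, `aXtraNestFac` below) produces 48 increasing points whose spacing grows toward the upper end.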
#
# ## 1. Consumer with explicit Permanent income
#
# Let's make an example of our generic model above: an "explicit permanent income" consumer who experiences idiosyncratic permanent and transitory shocks to income and faces permanent income growth.
#
# The agent's problem can be written in Bellman form as:
#
# \begin{eqnarray*}
# v_t(M_t,p_t) &=& \max_{c_t} U(c_t) + \beta (1-\mathsf{D}_{t+1}) \mathbb{E} [v_{t+1}(M_{t+1}, p_{t+1}) ], \\
# a_t &=& M_t - c_t, \\
# a_t &\geq& \underline{a}, \\
# M_{t+1} &=& R/(\Gamma_{t+1} \psi_{t+1}) a_t + \theta_{t+1}, \\
# p_{t+1} &=& G_{t+1}(p_t)\psi_{t+1}, \\
# \psi_t \sim F_{\psi t} &\qquad& \theta_t \sim F_{\theta t}, \mathbb{E} [F_{\psi t}] = 1, \\
# U(c) &=& \frac{c^{1-\rho}}{1-\rho}.
# \end{eqnarray*}
#
#
# This agent type is identical to an $\texttt{IndShockConsumerType}$ but for explicitly tracking $\texttt{pLvl}$ as a state variable during solution as shown in the mathematical representation of GenIncProcess model.
#
# To construct $\texttt{IndShockExplicitPermIncConsumerType}$ as an instance of $\texttt{GenIncProcessConsumerType}$, we need to pass additional parameters to the constructor as shown in the table below.
#
# ### Additional parameters to solve ExplicitPermInc model
#
# | Param | Description | Code | Value | Constructed |
# | :---: | --- | --- | --- | :---: |
# |(none)|percentiles of the distribution of persistent income|$\texttt{pLvlPctiles}$|||
# | $G$ |Expected persistent income next period | $\texttt{pLvlNextFunc}$ | - | $\surd$ |
# |$\Gamma $|Permanent income growth factor|$\texttt{PermGroFac}$|[1.0]| |
#
#
# ### Constructed inputs to solve ExplicitPermInc
#
# * In this "explicit permanent income" model, we overwrite the method $\texttt{updatepLvlNextFunc}$ to create $\texttt{pLvlNextFunc}$ as a sequence of linear functions, indicating constant expected permanent income growth across permanent income levels. This method uses the attribute $\texttt{PermGroFac}$, and installs a special retirement function when it exists.
#
#
# + code_folding=[0]
# This cell defines a dictionary to make an instance of "explicit permanent income" consumer.
GenIncDictionary = {
"CRRA": 2.0, # Coefficient of relative risk aversion
"Rfree": 1.03, # Interest factor on assets
"DiscFac": 0.96, # Intertemporal discount factor
"LivPrb" : [0.98], # Survival probability
"AgentCount" : 10000, # Number of agents of this type (only matters for simulation)
"aNrmInitMean" : 0.0, # Mean of log initial assets (only matters for simulation)
"aNrmInitStd" : 1.0, # Standard deviation of log initial assets (only for simulation)
"pLvlInitMean" : 0.0, # Mean of log initial permanent income (only matters for simulation)
"pLvlInitStd" : 0.0, # Standard deviation of log initial permanent income (only matters for simulation)
"PermGroFacAgg" : 1.0, # Aggregate permanent income growth factor (only matters for simulation)
"T_age" : None, # Age after which simulated agents are automatically killed
"T_cycle" : 1, # Number of periods in the cycle for this agent type
# Parameters for constructing the "assets above minimum" grid
"aXtraMin" : 0.001, # Minimum end-of-period "assets above minimum" value
"aXtraMax" : 30, # Maximum end-of-period "assets above minimum" value
"aXtraExtra" : [0.005,0.01], # Some other value of "assets above minimum" to add to the grid
"aXtraNestFac" : 3, # Exponential nesting factor when constructing "assets above minimum" grid
"aXtraCount" : 48, # Number of points in the grid of "assets above minimum"
# Parameters describing the income process
"PermShkCount" : 7, # Number of points in discrete approximation to permanent income shocks
"TranShkCount" : 7, # Number of points in discrete approximation to transitory income shocks
"PermShkStd" : [0.1], # Standard deviation of log permanent income shocks
"TranShkStd" : [0.1], # Standard deviation of log transitory income shocks
"UnempPrb" : 0.05, # Probability of unemployment while working
"UnempPrbRet" : 0.005, # Probability of "unemployment" while retired
"IncUnemp" : 0.3, # Unemployment benefits replacement rate
"IncUnempRet" : 0.0, # "Unemployment" benefits when retired
"tax_rate" : 0.0, # Flat income tax rate
"T_retire" : 0, # Period of retirement (0 --> no retirement)
"BoroCnstArt" : 0.0, # Artificial borrowing constraint; imposed minimum level of end-of period assets
"CubicBool" : False, # Use cubic spline interpolation when True, linear interpolation when False
"vFuncBool" : True, # Whether to calculate the value function during solution
# More parameters specific to "Explicit Permanent income" shock model
"cycles": 0,
"pLvlPctiles" : np.concatenate(([0.001, 0.005, 0.01, 0.03], np.linspace(0.05, 0.95, num=19),[0.97, 0.99, 0.995, 0.999])),
"PermGroFac" : [1.0], # Permanent income growth factor - long run permanent income growth doesn't work yet
}
# -
# Let's now create an instance of the type of consumer we are interested in and solve this agent's problem with an infinite horizon (cycles=0).
# +
# Make and solve an example "explicit permanent income" consumer with idiosyncratic shocks
ExplicitExample = IndShockExplicitPermIncConsumerType(**GenIncDictionary)
print('Here, the lowest percentile is ' + str(GenIncDictionary['pLvlPctiles'][0]*100))
print('and the highest percentile is ' + str(GenIncDictionary['pLvlPctiles'][-1]*100) + '.\n')
ExplicitExample.solve()
# -
# In the cell below, we generate a plot of the consumption function for explicit permanent income consumer at different income levels.
# + code_folding=[]
# Plot the consumption function at various permanent income levels.
print('Consumption function by pLvl for explicit permanent income consumer:')
pLvlGrid = ExplicitExample.pLvlGrid[0]
mLvlGrid = np.linspace(0,20,300)
for p in pLvlGrid:
    M_temp = mLvlGrid + ExplicitExample.solution[0].mLvlMin(p)
    C = ExplicitExample.solution[0].cFunc(M_temp, p*np.ones_like(M_temp))
    plt.plot(M_temp, C)
plt.xlim(0., 20.)
plt.xlabel('Market resource level mLvl')
plt.ylabel('Consumption level cLvl')
plt.show()
# -
# ## Permanent income normalized
#
# An alternative approach is to normalize the model by dividing all variables by permanent income $p_t$, and then solve it again.
# Make and solve an example of normalized model
NormalizedExample = IndShockConsumerType(**GenIncDictionary)
NormalizedExample.solve()
# +
# Compare the normalized problem with and without explicit permanent income and plot the consumption functions
print('Normalized consumption function by pLvl for explicit permanent income consumer:')
pLvlGrid = ExplicitExample.pLvlGrid[0]
mNrmGrid = np.linspace(0,20,300)
for p in pLvlGrid:
    M_temp = mNrmGrid*p + ExplicitExample.solution[0].mLvlMin(p)
    C = ExplicitExample.solution[0].cFunc(M_temp, p*np.ones_like(M_temp))
    plt.plot(M_temp/p, C/p)
plt.xlim(0., 20.)
plt.xlabel('Normalized market resources mNrm')
plt.ylabel('Normalized consumption cNrm')
plt.show()
print('Consumption function for normalized problem (without explicit permanent income):')
mNrmMin = NormalizedExample.solution[0].mNrmMin
plotFuncs(NormalizedExample.solution[0].cFunc,mNrmMin,mNrmMin+20.)
# -
# The figures above show that the normalized consumption function for the "explicit permanent income" consumer is almost identical for every permanent income level (and the same as the normalized problem's $\texttt{cFunc}$), but is less accurate due to extrapolation outside the bounds of $\texttt{pLvlGrid}$.
#
# The "explicit permanent income" solution deviates from the solution to the normalized problem because of errors from extrapolating beyond the bounds of the $\texttt{pLvlGrid}$. The error is largest for $\texttt{pLvl}$ values near the upper and lower bounds, and propagates toward the center of the distribution.
#
# Plot the value function at various permanent income levels
if ExplicitExample.vFuncBool:
    pGrid = np.linspace(0.1, 3.0, 24)
    M = np.linspace(0.001, 5, 300)
    for p in pGrid:
        M_temp = M + ExplicitExample.solution[0].mLvlMin(p)
        C = ExplicitExample.solution[0].vFunc(M_temp, p*np.ones_like(M_temp))
        plt.plot(M_temp, C)
    plt.ylim([-200, 0])
    plt.xlabel('Market resource level mLvl')
    plt.ylabel('Value v')
    plt.show()
# Simulate many periods to get to the stationary distribution
ExplicitExample.T_sim = 500
ExplicitExample.track_vars = ['mLvlNow','cLvlNow','pLvlNow']
ExplicitExample.initializeSim()
ExplicitExample.simulate()
plt.plot(np.mean(ExplicitExample.mLvlNow_hist,axis=1))
plt.xlabel('Simulated time period')
plt.ylabel('Average market resources mLvl')
plt.show()
# ## 2. Persistent income shock consumer
#
#
# This class solves consumption-saving models with idiosyncratic shocks to income that are persistent and transitory. The model extends $\texttt{ConsGenIncProcessModel}$ by allowing (log) persistent income to follow an AR(1) process.
#
# The agent's problem can be written in Bellman form as:
#
# \begin{eqnarray*}
# v_t(M_t,p_t) &=& \max_{c_t} U(c_t) + \beta (1-\mathsf{D}_{t+1}) \mathbb{E} [v_{t+1}(M_{t+1}, p_{t+1}) ], \\
# a_t &=& M_t - c_t, \\
# a_t &\geq& \underline{a}, \\
# M_{t+1} &=& R a_t + \theta_{t+1}, \\
# \log(p_{t+1}) &=& \varphi \log(p_t) + (1-\varphi)\log(\overline{p}_{t+1}) + \log(\Gamma_{t+1}) + \log(\psi_{t+1}), \\
# \\
# \psi_t \sim F_{\psi t} &\qquad& \theta_t \sim F_{\theta t}, \mathbb{E} [F_{\psi t}] = 1 \\
# \end{eqnarray*}
#
# ### Additional parameters to solve PersistentShock model
#
# | Param | Description | Code | Value | Constructed |
# | :---: | --- | --- | --- | :---: |
# |$\varphi$|Serial correlation coefficient for permanent income|$\texttt{PrstIncCorr}$|0.98||
# ||||||
#
# ### Constructed inputs to solve PersistentShock
#
# * For this model, we overwrite the method $\texttt{updatepLvlNextFunc}$ to create the input $\texttt{pLvlNextFunc}$ as a sequence of AR1-style functions. The method uses now the attributes $\texttt{PermGroFac}$ and $\texttt{PrstIncCorr}$. If cycles=0, the product of $\texttt{PermGroFac}$ across all periods must be 1.0, otherwise this method is invalid.
#
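To build intuition for the AR(1) law of motion above, here is a minimal NumPy sketch (independent of HARK; parameter values are illustrative, and the growth factor $\Gamma$ is omitted) simulating log persistent income and showing mean reversion toward the trend level:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.98          # serial correlation (PrstIncCorr)
p_bar = 1.0         # trend level of persistent income
sigma_psi = 0.05    # std of the log persistent shock (illustrative)
T = 5000

log_p = np.empty(T)
log_p[0] = np.log(4.0)   # start far above trend
for t in range(1, T):
    # log(p_{t+1}) = phi*log(p_t) + (1 - phi)*log(p_bar) + log(psi_{t+1})
    # the shock mean -sigma^2/2 keeps E[psi] approximately 1
    log_p[t] = phi * log_p[t-1] + (1 - phi) * np.log(p_bar) \
               + rng.normal(-sigma_psi**2 / 2, sigma_psi)

# With phi < 1, the process reverts toward log(p_bar) = 0
print(np.mean(log_p[-1000:]))
```

With $\varphi = 0.98$ the pull toward the trend is weak but present; setting $\varphi = 1$ recovers the random-walk (fully permanent shock) special case.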
# + code_folding=[0]
# Make a dictionary for the "persistent idiosyncratic shocks" model
PrstIncCorr = 0.98 # Serial correlation coefficient for persistent income
persistent_shocks = copy(GenIncDictionary)
persistent_shocks['PrstIncCorr'] = PrstIncCorr
# -
# The $\texttt{PersistentShockConsumerType}$ class solves the problem of a consumer facing idiosyncratic shocks to persistent and transitory income, where (log) persistent income follows an AR1 process rather than a random walk.
# Make and solve an example of "persistent idiosyncratic shocks" consumer
PersistentExample = PersistentShockConsumerType(**persistent_shocks)
PersistentExample.solve()
# Plot the consumption function at various levels of persistent income pLvl
print('Consumption function by persistent income level pLvl for a consumer with AR1 coefficient of ' + str(PersistentExample.PrstIncCorr) + ':')
pLvlGrid = PersistentExample.pLvlGrid[0]
mLvlGrid = np.linspace(0,20,300)
for p in pLvlGrid:
M_temp = mLvlGrid + PersistentExample.solution[0].mLvlMin(p)
C = PersistentExample.solution[0].cFunc(M_temp,p*np.ones_like(M_temp))
plt.plot(M_temp,C)
plt.xlim(0.,20.)
plt.xlabel('Market resource level mLvl')
plt.ylabel('Consumption level cLvl')
plt.show()
# Plot the value function at various persistent income levels
if PersistentExample.vFuncBool:
pGrid = PersistentExample.pLvlGrid[0]
M = np.linspace(0.001,5,300)
for p in pGrid:
M_temp = M+PersistentExample.solution[0].mLvlMin(p)
C = PersistentExample.solution[0].vFunc(M_temp,p*np.ones_like(M_temp))
plt.plot(M_temp,C)
plt.ylim([-200,0])
plt.xlabel('Market resource level mLvl')
plt.ylabel('Value v')
plt.show()
# Simulate some data
PersistentExample.T_sim = 500
PersistentExample.track_vars = ['mLvlNow','cLvlNow','pLvlNow']
PersistentExample.initializeSim()
PersistentExample.simulate()
plt.plot(np.mean(PersistentExample.mLvlNow_hist,axis=1))
plt.xlabel('Simulated time period')
plt.ylabel('Average market resources mLvl')
plt.show()
| notebooks/.ipynb_checkpoints/GenIncProcessModel-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import gurobipy as gpy
model = gpy.Model('mulearn-scaled')
x1 = model.addVar(name='x1', vtype=gpy.GRB.CONTINUOUS, lb=0, ub=1)
x2 = model.addVar(name='x2', vtype=gpy.GRB.CONTINUOUS, lb=0, ub=1)
l0 = 200
l1 = -200
x1.start = .45
x2.start = .55
obj = x1**2 + x2**2 + l0 * (x1 + x2 - 1) + l1 * (1 - x1 - x2)
model.setObjective(obj, gpy.GRB.MAXIMIZE)
#model.setParam("NonConvex", 2)
model.update()
model.optimize()
x1.x
x2.x
model.objVal
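As a sanity check on these multiplier experiments, the underlying primal problem (min x1² + x2² subject to x1 + x2 = 1, with the box constraints inactive at the optimum) can be solved exactly from its KKT conditions with plain NumPy — a reference sketch, not part of the original notebook:

```python
import numpy as np

# KKT system for min x'x  s.t.  A x = b  (equality constraint only):
#   [2I  A'] [x  ]   [0]
#   [A   0 ] [lam] = [b]
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
K = np.block([[2 * np.eye(2), A.T],
              [A, np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(2), b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:2], sol[2:]
print(x, x @ x)   # optimal point x1 = x2 = 0.5, objective 0.5
```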
import tensorflow as tf
import numpy as np
# # Redoing everything
# ## Linear relaxation of the equality constraint and expanded quadratic form
# +
# THIS IS THE RIGHT VERSION
from tensorflow.keras.optimizers import Adam
def solve_problem(Q, q, A, b, C, d,
max_iter=1000,
max_gap=1E-4,
alpha_0=1E-1,
window_width=30,
learning_rate=1E-2,
verbose=False):
'''Solves the Lagrangian relaxation of a constrained optimization
problem and returns its result. The structure of the primal problem
is the following
min x.T Q x + q.T x
subject to
A x = b
C x <= d
where .T denotes the transposition operator. Optimization takes place
in an iterated two-step procedure: an outer process devoted to modifying
the values of the Lagrange multipliers, and an inner process working on
the primal variables.
The arguments are as follows, given n as the number of variables of
the primal problem (i.e., the length of x)
- Q: n x n matrix containing the quadratic coefficients of the cost
function;
- q: vector containing the n linear coefficients of the cost function
- A: s x n matrix containing the coefficients of the = constraints
- b: vector containing the s right members of the = constraints
- C: t x n matrix containing the coefficients of the <= constraints
- d: vector containing the t right members of the <= constraints
- max_iter: maximum number of iterations of the *outer* optimization
procedure
- max_gap: maximum gap between primal and dual objectives ensuring
premature end of the *outer* optimization procedure
- alpha_0: initial value of the learning rate in the *outer* optimization
procedure
- window_width: width of the moving window on the objective function for
the *inner* optimization process
- learning_rate: learning rate of the Adam optimizer used in the *inner*
optimization process
- verbose: boolean flag triggering verbose output
returns the histories of primal and dual objective values, the primal
variable x, the final Lagrange multipliers, and the history of duality gaps.
'''
optimizer = Adam(learning_rate=learning_rate)  # use the learning_rate argument (was hard-coded to 10**-2)
x = tf.Variable(np.random.random(len(q)),
name='x',
trainable=True,
dtype=tf.float32)
Q = tf.constant(np.array(Q), dtype='float32')
q = tf.constant(np.array(q), dtype='float32')
A = np.array(A)
s = len(A)
C = np.array(C)
b = np.array(b)
d = np.array(d)
M = np.vstack([A, -A, C])
m = np.hstack([b, -b, d])
lambda_ = tf.constant(np.random.random(len(m)), dtype='float32')
M = tf.constant(M, dtype='float32')
m = tf.constant(m, dtype='float32')
def original_objective():
def obj():
return tf.tensordot(tf.linalg.matvec(Q, x), x, axes=1) + \
tf.tensordot(q, x, axes=1)
return obj
def lagrangian_objective(lambda_):
def obj():
return tf.tensordot(tf.linalg.matvec(Q, x), x, axes=1) + \
tf.tensordot(q, x, axes=1) + \
tf.tensordot(lambda_, m - tf.linalg.matvec(M, x), axes=1)
return obj
obj_val = []
lagr_val = []
gap_val = []
gap = max_gap + 1
num_bad_iterations = 0
prev_orig = np.inf
i = 0
while i < max_iter and (gap<0 or gap > max_gap):
lagr_obj = lagrangian_objective(lambda_)
orig_obj = original_objective()
prev_lagr = 10**3
curr_lagr = 0
vals = []
t = 0
window = list(np.logspace(1, window_width, window_width))
# this is to ensure a high value for the standard deviation
# of the elements to which the window has been initialized
while (np.std(window)/abs(np.mean(window)) > 0.001 or t < 100) \
and t < 1000:
optimizer.minimize(lagr_obj, var_list=x)
prev_lagr = curr_lagr
curr_lagr = lagr_obj().numpy()
vals.append(curr_lagr)
t += 1
window = window[1:]
window.append(curr_lagr)
curr_orig = orig_obj().numpy()
if curr_orig < prev_orig:
num_bad_iterations += 1
prev_orig = curr_orig
obj_val.append(curr_orig)
lagr_val.append(curr_lagr)
subgradient = (m - tf.linalg.matvec(M, x)).numpy()
gap = tf.tensordot(lambda_[:2*s], m[:2*s] - \
tf.linalg.matvec(M[:2*s], x), axes=1).numpy()
gap_val.append(gap)
if verbose and i%10 == 0:
print(f'i={i}, dual={lagr_obj().numpy():.3f}, '
f'prim={orig_obj().numpy():.3f}, gap={gap:.6f}')
alpha = alpha_0 / num_bad_iterations
lambda_ = tf.maximum(0, lambda_ + alpha * subgradient)
i += 1
return obj_val, lagr_val, x, lambda_, gap_val
# +
Q = [[1, 0], [0, 1]]
q = [0, 0]
A = [[1, 1]]
b = [1]
C = [[-1, -1], [1, 0], [0, 1], [-1, 0], [0, -1]]
d = [-1, 1, 1, 0, 0]
obj_val, lagr_val, x, lambda_, gap = solve_problem(Q, q, A, b, C, d, max_gap=10**-4, verbose=True)
# +
import matplotlib.pyplot as plt
plt.plot(np.arange(1, len(obj_val)+1), obj_val, label='primal')
# plt.plot(np.arange(1, len(lagr_val)+1), lagr_val, label='dual')
plt.legend()
plt.show()
# -
plt.plot(np.arange(1, len(gap)+1), gap, label='gap')
plt.ylim(0, 0.3)
plt.legend()
plt.show()
x.numpy()
obj_val[-1]
gap[-1]
# ## A step-by-step test
xs = np.array([[-2.68412563, 0.31939725],
[-2.71414169, -0.17700123],
[-2.88899057, -0.14494943],
[-2.74534286, -0.31829898],
[-2.72871654, 0.32675451],
[-2.28085963, 0.74133045],
[-2.82053775, -0.08946138],
[-2.62614497, 0.16338496],
[-2.88638273, -0.57831175],
[-2.6727558 , -0.11377425],
[-2.50694709, 0.6450689 ],
[-2.61275523, 0.01472994],
[-2.78610927, -0.235112 ],
[-3.22380374, -0.51139459],
[-2.64475039, 1.17876464],
[-2.38603903, 1.33806233],
[-2.62352788, 0.81067951],
[-2.64829671, 0.31184914],
[-2.19982032, 0.87283904],
[-2.5879864 , 0.51356031],
[-2.31025622, 0.39134594],
[-2.54370523, 0.43299606],
[-3.21593942, 0.13346807],
[-2.30273318, 0.09870885],
[-2.35575405, -0.03728186],
[-2.50666891, -0.14601688],
[-2.46882007, 0.13095149],
[-2.56231991, 0.36771886],
[-2.63953472, 0.31203998],
[-2.63198939, -0.19696122],
[-2.58739848, -0.20431849],
[-2.4099325 , 0.41092426],
[-2.64886233, 0.81336382],
[-2.59873675, 1.09314576],
[-2.63692688, -0.12132235],
[-2.86624165, 0.06936447],
[-2.62523805, 0.59937002],
[-2.80068412, 0.26864374],
[-2.98050204, -0.48795834],
[-2.59000631, 0.22904384],
[-2.77010243, 0.26352753],
[-2.84936871, -0.94096057],
[-2.99740655, -0.34192606],
[-2.40561449, 0.18887143],
[-2.20948924, 0.43666314],
[-2.71445143, -0.2502082 ],
[-2.53814826, 0.50377114],
[-2.83946217, -0.22794557],
[-2.54308575, 0.57941002],
[-2.70335978, 0.10770608],
[ 1.28482569, 0.68516047],
[ 0.93248853, 0.31833364],
[ 1.46430232, 0.50426282],
[ 0.18331772, -0.82795901],
[ 1.08810326, 0.07459068],
[ 0.64166908, -0.41824687],
[ 1.09506066, 0.28346827],
[-0.74912267, -1.00489096],
[ 1.04413183, 0.2283619 ],
[-0.0087454 , -0.72308191],
[-0.50784088, -1.26597119],
[ 0.51169856, -0.10398124],
[ 0.26497651, -0.55003646],
[ 0.98493451, -0.12481785],
[-0.17392537, -0.25485421],
[ 0.92786078, 0.46717949],
[ 0.66028376, -0.35296967],
[ 0.23610499, -0.33361077],
[ 0.94473373, -0.54314555],
[ 0.04522698, -0.58383438],
[ 1.11628318, -0.08461685],
[ 0.35788842, -0.06892503],
[ 1.29818388, -0.32778731],
[ 0.92172892, -0.18273779],
[ 0.71485333, 0.14905594],
[ 0.90017437, 0.32850447],
[ 1.33202444, 0.24444088],
[ 1.55780216, 0.26749545],
[ 0.81329065, -0.1633503 ],
[-0.30558378, -0.36826219],
[-0.06812649, -0.70517213],
[-0.18962247, -0.68028676],
[ 0.13642871, -0.31403244],
[ 1.38002644, -0.42095429],
[ 0.58800644, -0.48428742],
[ 0.80685831, 0.19418231],
[ 1.22069088, 0.40761959],
[ 0.81509524, -0.37203706],
[ 0.24595768, -0.2685244 ],
[ 0.16641322, -0.68192672],
[ 0.46480029, -0.67071154],
[ 0.8908152 , -0.03446444],
[ 0.23054802, -0.40438585],
[-0.70453176, -1.01224823],
[ 0.35698149, -0.50491009],
[ 0.33193448, -0.21265468],
[ 0.37621565, -0.29321893],
[ 0.64257601, 0.01773819],
[-0.90646986, -0.75609337],
[ 0.29900084, -0.34889781],
[ 2.53119273, -0.00984911],
[ 1.41523588, -0.57491635],
[ 2.61667602, 0.34390315],
[ 1.97153105, -0.1797279 ],
[ 2.35000592, -0.04026095],
[ 3.39703874, 0.55083667],
[ 0.52123224, -1.19275873],
[ 2.93258707, 0.3555 ],
[ 2.32122882, -0.2438315 ],
[ 2.91675097, 0.78279195],
[ 1.66177415, 0.24222841],
[ 1.80340195, -0.21563762],
[ 2.1655918 , 0.21627559],
[ 1.34616358, -0.77681835],
[ 1.58592822, -0.53964071],
[ 1.90445637, 0.11925069],
[ 1.94968906, 0.04194326],
[ 3.48705536, 1.17573933],
[ 3.79564542, 0.25732297],
[ 1.30079171, -0.76114964],
[ 2.42781791, 0.37819601],
[ 1.19900111, -0.60609153],
[ 3.49992004, 0.4606741 ],
[ 1.38876613, -0.20439933],
[ 2.2754305 , 0.33499061],
[ 2.61409047, 0.56090136],
[ 1.25850816, -0.17970479],
[ 1.29113206, -0.11666865],
[ 2.12360872, -0.20972948],
[ 2.38800302, 0.4646398 ],
[ 2.84167278, 0.37526917],
[ 3.23067366, 1.37416509],
[ 2.15943764, -0.21727758],
[ 1.44416124, -0.14341341],
[ 1.78129481, -0.49990168],
[ 3.07649993, 0.68808568],
[ 2.14424331, 0.1400642 ],
[ 1.90509815, 0.04930053],
[ 1.16932634, -0.16499026],
[ 2.10761114, 0.37228787],
[ 2.31415471, 0.18365128],
[ 1.9222678 , 0.40920347],
[ 1.41523588, -0.57491635],
[ 2.56301338, 0.2778626 ],
[ 2.41874618, 0.3047982 ],
[ 1.94410979, 0.1875323 ],
[ 1.52716661, -0.37531698],
[ 1.76434572, 0.07885885],
[ 1.90094161, 0.11662796],
[ 1.39018886, -0.28266094]])
mus = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
x_setosa = xs[:50]
y_setosa = mus[:50]
x_virginica = xs[50:100]
y_virginica = mus[50:100]
x_versicolor = xs[100:]
y_versicolor = mus[100:]
# +
import matplotlib.pyplot as plt
plt.scatter(*x_versicolor.T, c=y_versicolor)
plt.show()
# +
from mulearn.kernel import GaussianKernel
k = GaussianKernel()
c = 2
# -
m = len(xs)
import tensorflow as tf
alphas = [tf.Variable(ch, name=f'alpha_{i}', trainable=True, dtype=tf.float32)
for i, ch in enumerate(np.random.uniform(-0.1, 0.1, m))]
betas = [tf.Variable(ch, name=f'beta_{i}', trainable=True, dtype=tf.float32)
for i, ch in enumerate(np.random.uniform(-0.1, 0.1, m))]
gram = np.array([[k.compute(x1, x2) for x2 in xs] for x1 in xs])
x = alphas + betas
K11 = np.array([[-mu_i * mu_j for mu_j in mus] for mu_i in mus]) * gram
K00 = np.array([[-(1-mu_i) * (1 - mu_j) for mu_j in mus] for mu_i in mus]) * gram
K01 = np.array([[2 * mu_i * (1-mu_j) for mu_j in mus] for mu_i in mus]) * gram
Z = np.zeros((m, m))
Q = -np.vstack((np.hstack((K11, K01)), np.hstack((Z, K00))))
q = -np.hstack((np.diag(gram) * mus, np.diag(gram) * (1 - mus)))
A = np.array([np.hstack((mus, 1-mus))])
b = np.array([1])
C = np.vstack((np.hstack((np.identity(m), np.zeros((m, m)))),
np.hstack((np.zeros((m, m)), -np.identity(m)))))
d = np.hstack((np.zeros(m), - c * np.ones(m)))
# +
x = np.random.random(2*m)
assert(Q.shape == (2*m, 2*m))
assert(len(q) == 2*m)
assert(type(x @ (Q @ x) + q @ x) is np.float64)
assert(A.shape == (1, 2*m))
assert(len(b) == 1)
assert(type(A@x - b) == np.ndarray)
assert(len(A@x - b) == 1)
assert(C.shape == (2*m, 2*m))
assert(len(d) == 2*m)
assert(type(C @ x - d) == np.ndarray)
assert(len(C @ x - d) == 2*m)
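The solver below handles the equality constraint A x = b by stacking it as the inequality pair A x ≤ b and −A x ≤ −b alongside C x ≤ d, forming a single system M x ≤ m. A minimal check (with illustrative matrices) that this stacking preserves feasibility:

```python
import numpy as np

A = np.array([[1.0, 1.0]]); b = np.array([1.0])
C = np.array([[-1.0, -1.0]]); d = np.array([-1.0])

# A x = b  becomes  A x <= b  together with  -A x <= -b
M = np.vstack([A, -A, C])
m = np.hstack([b, -b, d])

x_feas = np.array([0.3, 0.7])    # satisfies x1 + x2 = 1
x_infeas = np.array([0.3, 0.3])  # violates the equality

print(np.all(M @ x_feas <= m + 1e-12))    # feasible point passes
print(np.all(M @ x_infeas <= m + 1e-12))  # infeasible point fails
```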
# +
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
def solve_lagrange_relaxation(Q, q, A, b, C, d,
max_iter=1000,
max_gap=10**-4,
alpha_0 = 10**-1,
window_width = 30,
verbose=False):
"""Solves the lagrangian relaxation for a constrained optimization
problem and returns its result. The structure of the primal problem
is the following
min x.T Q x + q.T x
subject to
A x = b
C x <= d
where .T denotes the transposition operator. Optimization takes place
in an iterated two-step procedure: an outer process devoted to modifying
the values of the Lagrange multipliers, and an inner process working on
the primal variables.
The arguments are as follows, given n as the number of variables of
the primal problem (i.e., the length of x)
- Q: n x n matrix containing the quadratic coefficients of the cost
function;
- q: vector containing the n linear coefficients of the cost function
- A: s x n matrix containing the coefficients of the = constraints
- b: vector containing the s right members of the = constraints
- C: t x n matrix containing the coefficients of the <= constraints
- d: vector containing the t right members of the <= constraints
- max_iter: maximum number of iterations of the *outer* optimization
procedure
- max_gap: maximum gap between primal and dual objectives ensuring
premature end of the *outer* optimization procedure
- alpha_0: initial value of the learning rate in the *outer* optimization
procedure
- window_width: width of the moving window on the objective function for
the *inner* optimization process
- verbose: boolean flag triggering verbose output
returns the histories of primal and dual objective values, the primal
variable x, the final Lagrange multipliers, and the history of duality gaps.
"""
#TODO add possibility to specify initial values
optimizer = Adam(learning_rate=1E-2)  # inner optimizer (was missing in this version)
x = tf.Variable(np.random.random(len(q)),
name='x',
trainable=True,
dtype=tf.float32)
Q = tf.constant(np.array(Q), dtype='float32')
q = tf.constant(np.array(q), dtype='float32')
A = np.array(A)
s = len(A)
C = np.array(C)
b = np.array(b)
d = np.array(d)
M = np.vstack([A, -A, C])
m = np.hstack([b, -b, d])
lambda_ = tf.constant(np.random.random(len(m)), dtype='float32')
M = tf.constant(M, dtype='float32')
m = tf.constant(m, dtype='float32')
def original_objective():
def obj():
return tf.tensordot(tf.linalg.matvec(Q, x), x, axes=1) + \
tf.tensordot(q, x, axes=1)
return obj
def lagrangian_objective(lambda_):
def obj():
return tf.tensordot(tf.linalg.matvec(Q, x), x, axes=1) + \
tf.tensordot(q, x, axes=1) + \
tf.tensordot(lambda_, m - tf.linalg.matvec(M, x), axes=1)
return obj
obj_val = []
lagr_val = []
gap_val = []
gap = max_gap + 1
num_bad_iterations = 0
prev_orig = np.inf
i = 0
while i < max_iter and (gap<0 or gap > max_gap):
lagr_obj = lagrangian_objective(lambda_)
orig_obj = original_objective()
prev_lagr = 10**3
curr_lagr = 0
vals = []
t = 0
window = list(np.logspace(1, window_width, window_width))
# this is to ensure a high value for the standard deviation
# of the elements to which the window has been initialized
while (np.std(window)/abs(np.mean(window)) > 0.001 or t < 100) \
and t < 1000:
optimizer.minimize(lagr_obj, var_list=x)
prev_lagr = curr_lagr
curr_lagr = lagr_obj().numpy()
vals.append(curr_lagr)
t += 1
window = window[1:]
window.append(curr_lagr)
curr_orig = orig_obj().numpy()
if curr_orig < prev_orig:
num_bad_iterations += 1
prev_orig = curr_orig
obj_val.append(curr_orig)
lagr_val.append(curr_lagr)
subgradient = (m - tf.linalg.matvec(M, x)).numpy()
gap = tf.tensordot(lambda_[:2*s], m[:2*s] - \
tf.linalg.matvec(M[:2*s], x), axes=1).numpy()
gap_val.append(gap)
if verbose and i%10 == 0:
print(f'i={i}, dual={lagr_obj().numpy():.3f}, '
f'prim={orig_obj().numpy():.3f}, '
f'gap={gap:.6f}')
alpha = alpha_0 / num_bad_iterations
lambda_ = tf.maximum(0, lambda_ + alpha * subgradient)
i += 1
return obj_val, lagr_val, x, lambda_, gap_val
# -
sol = solve_lagrange_relaxation(Q, q, A, b, C, d, verbose=True)
from mulearn import FuzzyInductor
# +
import sklearn.datasets as ds
import pandas as pd
import numpy as np
iris_X, iris_y = ds.load_iris(return_X_y=True)
labels = ("Setosa", "Versicolor", "Virginica")
df = pd.DataFrame(iris_X, columns=["Sepal length", "Sepal width",
"Petal length", "Petal width"])
df['Class'] = iris_y
df['Class'] = df['Class'].map(lambda c: labels[c])
n = 20
filt = list(range(n)) + list(range(50, 50+n)) + list(range(100, 100+n))
df = df.iloc[filt]
df.head()
# +
from sklearn.decomposition import PCA
pca_2d = PCA(n_components=2)
iris_X_2d = pca_2d.fit_transform(iris_X)
iris_versicolor = iris_y.copy()
iris_versicolor[iris_versicolor==2] = 0
# +
from mulearn.optimization import TensorFlowSolver
from mulearn.fuzzifier import LinearFuzzifier
opt = tf.optimizers.Adam(learning_rate=1e-4)
fitf = FuzzyInductor(solver=TensorFlowSolver(optimizer=opt),
k=GaussianKernel(sigma=0.3),
fuzzifier=LinearFuzzifier())
fitf.fit(iris_X_2d, iris_versicolor)
# -
fitf.predict(iris_X_2d, alpha=0.5)
iris_versicolor
# +
x_space = np.linspace(-4, 4, 100)
y_space = np.linspace(-2, 2, 100)
X, Y = np.meshgrid(x_space, y_space)
# -
Z = [[fitf.predict([(x, y)])[0] for x in x_space]
for y in y_space]
# +
import matplotlib.cm as cm
plt.scatter(*iris_X_2d.T, color=cm.coolwarm(iris_versicolor*255), s=9, alpha=0.7)
contours = plt.contour(X, Y, Z, [.5, .6, .7, .8, .9], colors='gray')
plt.clabel(contours, inline=True, fontsize=8)
plt.show()
# -
| lagrange-mulearn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modeling and Simulation in Python
#
# Project 1 example
#
# Copyright 2018 <NAME>
#
# License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
#
# +
# Configure Jupyter so figures appear in the notebook
# %matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
# %config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
# +
from pandas import read_html
filename = '../data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
# -
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
if len(timeseries):
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
un = table2.un / 1e9
census = table2.census / 1e9
empty = TimeSeries()
plot_results(census, un, empty, 'World population estimates')
# ### Why is world population growing linearly?
#
# Since 1970, world population has been growing approximately linearly, as shown in the previous figure. During this time, death and birth rates have decreased in most regions, but it is hard to imagine a mechanism that would cause them to decrease in a way that yields constant net growth year after year. So why is world population growing linearly?
#
# To explore this question, we will look for a model that reproduces linear growth, and identify the essential features that yield this behavior.
#
# Specifically, we'll add two new features to the model:
#
# 1. Age: The current model does not account for age; we will extend the model by including two age groups, young and old, roughly corresponding to people younger or older than 40.
#
# 2. The demographic transition: Birth rates have decreased substantially since 1970. We model this transition with an abrupt change in 1970 from an initial high level to a lower level.
# We'll use the 1950 world population from the US Census as an initial condition, assuming that half the population is young and half old.
half = get_first_value(census) / 2
init = State(young=half, old=half)
# We'll use a `System` object to store the parameters of the model.
system = System(birth_rate1 = 1/18,
birth_rate2 = 1/25,
transition_year = 1970,
mature_rate = 1/40,
death_rate = 1/40,
t_0 = 1950,
t_end = 2016,
init=init)
# Here's an update function that computes the state of the system during the next year, given the current state and time.
def update_func1(state, t, system):
if t <= system.transition_year:
births = system.birth_rate1 * state.young
else:
births = system.birth_rate2 * state.young
maturings = system.mature_rate * state.young
deaths = system.death_rate * state.old
young = state.young + births - maturings
old = state.old + maturings - deaths
return State(young=young, old=old)
# We'll test the update function with the initial condition.
state = update_func1(init, system.t_0, system)
# And we can do one more update using the state we just computed:
state = update_func1(state, system.t_0, system)
# The `run_simulation` function is similar to the one in the book; it returns a time series of total population.
def run_simulation(system, update_func):
"""Simulate the system using any update function.
init: initial State object
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
state = system.init
results[system.t_0] = state.young + state.old
for t in linrange(system.t_0, system.t_end):
state = update_func(state, t, system)
results[t+1] = state.young + state.old
return results
# Now we can run the simulation and plot the results:
results = run_simulation(system, update_func1);
plot_results(census, un, results, 'World population estimates')
# This figure shows the results from our model along with world population estimates from the United Nations Department of Economic and Social Affairs (UN DESA) and the US Census Bureau.
#
# We adjusted the parameters by hand to fit the data as well as possible. Overall, the model fits the data well.
#
# Nevertheless, between 1970 and 2016 there is clear curvature in the model that does not appear in the data, and in the most recent years it looks like the model is diverging from the data.
#
# In particular, the model would predict accelerating growth in the near future, which does not seem consistent with the trend in the data, and it contradicts predictions by experts.
#
# It seems that this model does not explain why world population is growing linearly. We conclude that adding two age groups to the model is not sufficient to produce linear growth. Modeling the demographic transition with an abrupt change in birth rate is not sufficient either.
#
# In future work, we might explore whether a gradual change in birth rate would work better, possibly using a logistic function. We also might explore the behavior of the model with more than two age groups.
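The gradual-transition idea mentioned above can be sketched with a logistic birth-rate schedule interpolating between the two rates around 1970 (the function name and parameter values are hypothetical, not fitted to data):

```python
import numpy as np

def birth_rate_logistic(t, rate_before=1/18, rate_after=1/25,
                        t_mid=1970, width=10):
    """Smoothly interpolate between two birth rates around t_mid.

    width controls (roughly) how many years the transition takes.
    """
    s = 1 / (1 + np.exp(-(t - t_mid) / width))  # goes 0 -> 1 over the transition
    return rate_before + s * (rate_after - rate_before)

# Far before the transition the rate is near 1/18, far after near 1/25
print(birth_rate_logistic(1900), birth_rate_logistic(2050))
```

This could replace the abrupt `if t <= system.transition_year` branch in `update_func1`.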
| code/world_pop_transition2_from_allendowney_github.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Meddebma/pyradiomics/blob/master/3D_Segmentation_eval.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ltF1K3dyrHpd"
# **Setup environment**
# + colab={"base_uri": "https://localhost:8080/"} id="_8RlsQ_Sqvdf" outputId="489c0d9b-e03b-4540-8d85-a6ee88ec8f9c"
# %pip install monai
# %pip install 'monai[nibabel, skimage, pillow, tensorboard, gdown, ignite, torchvision, itk, tqdm, lmdb, psutil]'
# %pip install matplotlib
# #%pip install -q pytorch-lightning
# %pip install pytorch-lightning
# %matplotlib inline
# + id="GN-NZFY7Iw73" colab={"base_uri": "https://localhost:8080/"} outputId="ad1c267d-e5ba-4f21-9686-7406a2fb72fa"
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from monai.utils import set_determinism
from monai.transforms import (
AsDiscrete,
AddChanneld,
Compose,
CropForegroundd,
LoadImaged,
Orientationd,
Lambda,
RandCropByPosNegLabeld,
ScaleIntensityRanged,
Spacingd,
LabelToContour,
KeepLargestConnectedComponent,
ToTensord,
)
from monai.networks.nets import UNet
from monai.networks.layers import Norm
from monai.metrics import compute_meandice, compute_roc_auc
from monai.losses import DiceLoss
from monai.inferers import sliding_window_inference
from monai.data import CacheDataset, list_data_collate
from monai.config import print_config
from monai.apps import download_and_extract
import torch
import pytorch_lightning
from pytorch_lightning.callbacks.model_checkpoint \
import ModelCheckpoint
import matplotlib.pyplot as plt
import tempfile
import shutil
import os
import sys
from glob import glob
import numpy as np
import statsmodels.formula.api as smf
import statsmodels.api as sm
import pandas as pd
import seaborn as sns
from monai.utils import set_determinism
from monai.transforms import (
AsDiscrete,
AddChanneld,
Compose,
CropForegroundd,
KeepLargestConnectedComponent,
LabelToContour,
LoadImaged,
Orientationd,
RandCropByPosNegLabeld,
ScaleIntensityRanged,
Spacingd,
EnsureTyped,
EnsureType,
)
from monai.networks.nets import UNet
from monai.networks.layers import Norm
from monai.metrics import DiceMetric, compute_hausdorff_distance
from monai.losses import DiceLoss
from monai.inferers import sliding_window_inference
from monai.data import CacheDataset, DataLoader, decollate_batch, NiftiSaver
from monai.config import print_config
from monai.apps import download_and_extract
import torch
import matplotlib.pyplot as plt
import tempfile
import shutil
import os
import glob
print_config()
# + colab={"base_uri": "https://localhost:8080/"} id="zZPvqOR0IPkn" outputId="74ce29e1-6b38-4081-d1e2-51010802440c"
eval_dir= "/content/drive/My Drive/Spleen_AI/CH Data"
images = sorted(glob.glob(os.path.join(eval_dir, "imagesTs", "*.nii.gz")))
labels = sorted(glob.glob(os.path.join(eval_dir, "labelsTs", "*.nii.gz")))
test = [{"image": image_name, "label": label_name}
for image_name, label_name in zip(images, labels)]
val_transforms = Compose(
[
LoadImaged(keys=["image", "label"]),
AddChanneld(keys=["image", "label"]),
Spacingd(
keys=["image", "label"],
pixdim=(1.5, 1.5, 2.0),
mode=("bilinear", "nearest"),
),
Orientationd(keys=["image", "label"], axcodes="PLI"),
ScaleIntensityRanged(
keys=["image"], a_min=-57, a_max=164,
b_min=0.0, b_max=1.0, clip=True,
),
CropForegroundd(keys=["image", "label"], source_key="image"),
EnsureTyped(keys=["image", "label"]),
]
)
val_ds = CacheDataset(
data=test, transform=val_transforms,
cache_rate=1.0, num_workers=2,
)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=1, num_workers=2)
from monai.metrics import DiceMetric
from monai.transforms import Activations, AddChannel, AsDiscrete, Compose, ScaleIntensity, ToTensor
dice_metric = DiceMetric(include_background=True, reduction="mean")
#post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold_values=True), KeepLargestConnectedComponent(applied_labels=[1],connectivity=1)])
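For intuition, the Dice score computed below reduces, for binary masks, to 2|A∩B| / (|A| + |B|); a tiny NumPy version (independent of MONAI, for illustration only):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # intersection 2, sizes 3 and 3 -> 2*2/6
```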
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="JDsfyw6ErGF_" outputId="167b54de-dde6-48db-fea0-18b95b3dbb39"
post_pred = Compose([EnsureType(), AsDiscrete(argmax=True, to_onehot=True, n_classes=2)])
post_label = Compose([EnsureType(), AsDiscrete(to_onehot=True, n_classes=2)])
post_trans = Compose([Activations(sigmoid=True), AsDiscrete(threshold_values=True)])
device = torch.device("cuda:0")
model = UNet(
dimensions=3,
in_channels=1,
out_channels=2,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
model.load_state_dict(torch.load("/content/drive/My Drive/Spleen_AI/best_metric_model_ch_500.pth"))
model.eval()
with torch.no_grad():
metric_sum = 0.0
metric_count = 0
#saver = NiftiSaver(output_dir="/content/drive/My Drive/Spleen_AI/CH Data/mask")
for i, val_data in enumerate(val_loader):
roi_size = (160, 160, 160)
sw_batch_size = 4
val_data = val_data["image"].to(device)
val_output = sliding_window_inference(
val_data, roi_size, sw_batch_size, model)
# plot the slice [:, :, 80]
plt.figure("check", (20, 4))
plt.subplot(1, 5, 1)
plt.title(f"image {i}")
plt.imshow(val_data.detach().cpu()[0, 0, :, :, 80], cmap="gray")
plt.subplot(1, 5, 2)
plt.title(f"argmax {i}")
argmax = [AsDiscrete(argmax=True)(i) for i in decollate_batch(val_output)]
plt.imshow(argmax[0].detach().cpu()[0, :, :, 80])
plt.subplot(1, 5, 3)
plt.title(f"largest {i}")
largest = [KeepLargestConnectedComponent(applied_labels=[1])(i) for i in argmax]
plt.imshow(largest[0].detach().cpu()[0, :, :, 80])
plt.subplot(1, 5, 4)
plt.title(f"contour {i}")
contour = [LabelToContour()(i) for i in largest]
plt.imshow(contour[0].detach().cpu()[0, :, :, 80])
plt.subplot(1, 5, 5)
plt.title(f"map image {i}")
map_image = contour[0] + val_data[0]
plt.imshow(map_image.detach().cpu()[0, :, :, 80], cmap="gray")
plt.show()
for val_data in val_loader:
val_images, val_labels = val_data["image"].to(device), val_data["label"].to(device)
val_outputs = sliding_window_inference(val_data["image"].to(device), roi_size, sw_batch_size, model)
val_outputs = post_trans(val_outputs)
shape=val_images.shape
vols = [(b[:,1]==1).sum() for b in val_outputs]
vol_label = [(x==1).sum() for x in val_labels]
value = compute_meandice(y_pred=val_outputs, y=val_labels,include_background=False)
hausdorff = compute_hausdorff_distance(y_pred=val_outputs, y=val_labels, include_background=False)
metric_count += len(value)
metric_sum += value.item() * len(value)
print(f"shape:", shape)
print(f"val_dice:", value[0])
print(f"Hausdorff:",hausdorff[0])
print(f"Volume segmentation:", vols)
print(f"Volume label:", vol_label)
#print(f"Shape:", shape)
#print(f"Volume:", vols, f"mm3")
#saver.save_batch(val_output[:, :, ...], val_data["image_meta_dict"])
# + id="sQttUi_InYHQ"
print(val_data["image_meta_dict"]['filename_or_obj'])
| 3D_Segmentation_eval.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Unsupervised learning
# + [markdown] slideshow={"slide_type": "subslide"}
# - It is a branch of Machine Learning concerned with extracting hidden information from unlabelled data.
# - Since the instances fed to the learning algorithm are unlabelled, there is no error signal we can use to guide the fitting
# + [markdown] slideshow={"slide_type": "subslide"}
# Some approaches:
# - Clustering
# - Blind Signal Separation for Dimensionality Reduction (e.g. PCA)
# + [markdown] slideshow={"slide_type": "subslide"}
# <a href="https://chatbotsmagazine.com/lets-know-supervised-and-unsupervised-in-an-easy-way-9168363e06ab"><img src="https://miro.medium.com/max/700/0*4XR2f4mAYq26mwXp"></a>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Clustering
# + [markdown] slideshow={"slide_type": "subslide"}
# - Clustering is the task of grouping a set of instances into subgroups (called clusters) such that instances in the same cluster are closer to each other according to some characteristic
# - Clustering algorithms use a distance measure to determine how close two instances are to each other
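The distance measure is most often plain Euclidean distance; a minimal NumPy sketch (the helper name is purely illustrative):

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two instances (1-D feature vectors)."""
    return np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

print(euclidean([0, 0], [3, 4]))  # -> 5.0
```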
# + [markdown] slideshow={"slide_type": "subslide"}
# There are several types of clustering:
# - Hierarchical
# - Centroid-based
# - Distribution-based
# - Density-based
# + [markdown] slideshow={"slide_type": "subslide"}
# ### KMeans
# - An algorithm that partitions the objects into k sets, with k < n (the number of instances)
# - It assumes the attributes form a vector space and tries to obtain a Voronoi tessellation
# + [markdown] slideshow={"slide_type": "subslide"}
# 1. Choose the number of clusters (k)
# 2. Initialize k clusters with k centroids chosen at random from among all the points
# 3. Assign each of the remaining points to the nearest centroid
# 4. Take each point in turn and compute its distance to the other centroids.
#     - If we find a different centroid at a smaller distance, move the point to that cluster and update the centroids of both clusters
# 5. Repeat step 4 until a full pass over all the points produces no reassignments
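The steps above describe an online (one-point-at-a-time) variant; a minimal NumPy sketch of the standard batch (Lloyd) variant, with purely illustrative names:

```python
import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # 2. pick k random points as the initial centroids
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # 3. assign every point to its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 4. move each centroid to the mean of its cluster (keep it if the cluster is empty)
        new_centroids = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):  # 5. stop once nothing moves
            break
        centroids = new_centroids
    return labels, centroids

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
labels, centroids = kmeans(pts, k=2)
```

With two well-separated blobs like these, the two recovered clusters coincide with the blobs.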
# + [markdown] slideshow={"slide_type": "subslide"}
# <img src="../images/kmeans_example.png">
# + [markdown] slideshow={"slide_type": "subslide"}
# Some use cases:
#
# - Behavioural segmentation: relating a user's shopping cart, action times and profile information.
# - Inventory categorization: grouping products by their sales activity
# - Detecting anomalies or suspicious activity: recognizing a troll -or a bot- as opposed to a normal user from their behaviour on a website
# + slideshow={"slide_type": "subslide"}
# Author: <NAME> <<EMAIL>>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
plt.figure(figsize=(12, 12))
n_samples = 1500
random_state = 170
X, y = make_blobs(n_samples=n_samples, random_state=random_state)
plt.scatter(X[:, 0], X[:, 1])
# + slideshow={"slide_type": "subslide"}
# K-means with the correct number of clusters (3 blobs, 3 clusters)
kmeans = KMeans(n_clusters=3, random_state=random_state)
y_pred = kmeans.fit_predict(X)
C = kmeans.cluster_centers_
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
plt.scatter(C[:, 0], C[:, 1], marker='*', c='red', s=200)
plt.show()
# + slideshow={"slide_type": "subslide"}
# Anisotropically distributed data
transformation = [[0.60834549, -0.63667341], [-0.40887718, 0.85253229]]
X_aniso = np.dot(X, transformation)
plt.scatter(X_aniso[:, 0], X_aniso[:, 1])
plt.show()
# + slideshow={"slide_type": "subslide"}
kmeans = KMeans(n_clusters=3, random_state=random_state)
y_pred = kmeans.fit_predict(X_aniso)
C = kmeans.cluster_centers_
plt.scatter(X_aniso[:, 0], X_aniso[:, 1], c=y_pred)
plt.scatter(C[:, 0], C[:, 1], marker='*', c='red', s=200)
plt.title("Anisotropicly Distributed Blobs")
# + slideshow={"slide_type": "subslide"}
# Different variance
X_varied, y_varied = make_blobs(n_samples=n_samples,
cluster_std=[1.0, 2.5, 0.5],
random_state=random_state)
plt.scatter(X_varied[:, 0], X_varied[:, 1])
plt.show()
# + slideshow={"slide_type": "subslide"}
kmeans = KMeans(n_clusters=3, random_state=random_state)
y_pred = kmeans.fit_predict(X_varied)
C = kmeans.cluster_centers_
plt.scatter(X_varied[:, 0], X_varied[:, 1], c=y_pred)
plt.scatter(C[:, 0], C[:, 1], marker='*', c='red', s=200)
plt.title("Unequal Variance")
# + slideshow={"slide_type": "subslide"}
# Unevenly sized blobs
X_filtered = np.vstack((X[y == 0][:500], X[y == 1][:100], X[y == 2][:10]))
plt.scatter(X_filtered[:, 0], X_filtered[:, 1])
plt.show()
# + slideshow={"slide_type": "subslide"}
kmeans = KMeans(n_clusters=3, random_state=random_state)
y_pred = kmeans.fit_predict(X_filtered)
C = kmeans.cluster_centers_
plt.scatter(X_filtered[:, 0], X_filtered[:, 1], c=y_pred)
plt.scatter(C[:, 0], C[:, 1], marker='*', c='red', s=200)
plt.title("Unevenly Sized Blobs")
plt.show()
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Buscar el valor de K
# + slideshow={"slide_type": "subslide"}
Nc = range(1, 20)
kmeans = [KMeans(n_clusters=i) for i in Nc]
# KMeans.score returns the negative inertia (within-cluster sum of squared distances)
score = [kmeans[i].fit(X).score(X) for i in range(len(kmeans))]
score
# + slideshow={"slide_type": "subslide"}
plt.plot(Nc,score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
| 9-aprendizaje-no-supervisado.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # The final project
#
# The final project is a group project. Groups will pose an original question, get and clean the data, and then conduct analysis.
#
# There are two main deliverables:
# 1. A website presenting your findings
# 2. A presentation given to your classmates
#
# More details will be posted soon, but you can ask me in person for an update anytime!
#
# 
| content/assignments/project-placeholder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="qnC6-B-KUBWO"
# # Author : <NAME>
#
# ## Task 2 : Prediction using Unsupervised Machine Learning
# ## GRIP @ The Sparks Foundation
#
# In this K-means clustering task I try to predict the optimum number of clusters for the given ‘Iris’ dataset and represent the result visually.
# + [markdown] id="q6baTYIeVZKC"
# ## **K-Means**
#
# K-means is a centroid-based algorithm, or a distance-based algorithm, where we calculate the distances to assign a point to a cluster. In K-Means, each cluster is associated with a centroid.
#
#
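Assigning a point to a cluster thus reduces to finding its nearest centroid; a small sketch with made-up centroid coordinates:

```python
import numpy as np

centroids = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])  # hypothetical centroids
point = np.array([4.2, 4.8])

# distance from the point to every centroid, then pick the closest
distances = np.linalg.norm(centroids - point, axis=1)
cluster = int(distances.argmin())
print(cluster)  # -> 1: the point joins the cluster of centroid 1
```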
# + [markdown] id="SuGMpo8X7ibG"
# ### IMPORTING REQUIRED LIBRARIES & LOADING DATASET
# + colab={"base_uri": "https://localhost:8080/"} id="kO_1kOEGDTws" outputId="a51ca200-0e21-4add-d6f5-6642ce349d51"
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import datasets
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# Load the iris dataset
iris = datasets.load_iris()
iris_df = pd.DataFrame(iris.data, columns = iris.feature_names)
print(iris_df.head()) # See the first 5 rows
# + [markdown] id="q_pPmK9GIKMz"
# ### DETERMINING THE OPTIMUM NUMBER OF CLUSTERS USING THE ELBOW METHOD
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="WevSKogFEalU" outputId="a1ec8a67-b48d-4ee8-bb5b-8c8b5dc958cb"
# Finding the optimum number of clusters for k-means classification
x = iris_df.iloc[:, [0, 1, 2, 3]].values
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
# Plotting the results onto a line graph,
# allowing us to observe 'the elbow'
plt.plot(range(1, 11), wcss)
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS') # Within cluster sum of squares
plt.show()
# + [markdown] id="IUXmLTh4Ih6r"
# #### CONCLUSION - OPTIMUM NUMBER OF CLUSTERS IS **3**
# + [markdown] id="3Q1CEFiq9GY1"
# ### CREATING THE KMEANS CLASSIFIER
# + id="aJbyXuNGIXI9"
# Applying kmeans to the dataset / Creating the kmeans classifier
kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)
# + [markdown] id="jV-UpnZq-9-n"
# ### PLOTTING THE CLUSTERS
# + colab={"base_uri": "https://localhost:8080/", "height": 285} id="Q42-XPJjIyXv" outputId="23e099c0-968a-4ed5-f5d5-7bd445ff20a2"
#Visualising the clusters
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 100, c = 'purple', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 100, c = 'orange', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Iris-virginica')
#Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1], s = 100, c = 'red', label = 'Centroids')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 846} id="iCT88_YuTpjo" outputId="95c0a685-68d9-4813-e864-f5c3615f3518"
# 3d scatterplot using matplotlib
fig = plt.figure(figsize = (15,15))
ax = fig.add_subplot(111, projection='3d')
# Use the 3D axes and a third feature so the plot is actually three-dimensional
ax.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], x[y_kmeans == 0, 2], s = 50, c = 'purple', label = 'Iris-setosa')
ax.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], x[y_kmeans == 1, 2], s = 50, c = 'orange', label = 'Iris-versicolour')
ax.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], x[y_kmeans == 2, 2], s = 50, c = 'green', label = 'Iris-virginica')
# Plotting the centroids of the clusters
ax.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], kmeans.cluster_centers_[:, 2], s = 50, c = 'red', label = 'Centroids')
plt.show()
# + [markdown] id="Qn1mBYrV3Htu"
# ### Labeling the predictions
# + id="sMRBswry3D4c"
# Map the integer cluster ids to species names:
# 0 -> 'Iris-setosa', 1 -> 'Iris-versicolour', 2 -> 'Iris-virginica'
y_kmeans = np.where(y_kmeans==0, 'Iris-setosa', y_kmeans)
y_kmeans = np.where(y_kmeans=='1', 'Iris-versicolour', y_kmeans)
y_kmeans = np.where(y_kmeans=='2', 'Iris-virginica', y_kmeans)
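The chained `np.where` calls above work but silently turn the array into strings along the way; a more direct sketch of the same mapping via a dictionary lookup (it assumes the integer predictions are still available):

```python
import numpy as np

species = {0: 'Iris-setosa', 1: 'Iris-versicolour', 2: 'Iris-virginica'}
preds = np.array([0, 2, 1, 0])  # example integer cluster ids
names = np.array([species[p] for p in preds])
print(names)
```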
# + [markdown] id="XwjEkBbl3STI"
# ### Adding the prediction to the dataset
# + colab={"base_uri": "https://localhost:8080/"} id="mTQg_p_13Z_I" outputId="7b51d499-f986-44cc-a067-9f696520e0de"
data_with_clusters = iris_df.copy()
data_with_clusters["Cluster"] = y_kmeans
print(data_with_clusters.head(5))
# + [markdown] id="FLKfXzbG633x"
# # DATA VISUALIZATION
# + [markdown] id="BVGLZF-jEF6p"
# ### BARPLOT- CLUSTER DISTRIBUTION
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="xCBDZYXOED3H" outputId="ab146f95-8eb3-44e9-9e2e-1c4dae825e8f"
# Bar plot
sns.set_style('darkgrid')
# value_counts() is sorted by count, so take the labels and heights from the same Series
cluster_counts = data_with_clusters["Cluster"].value_counts()
sns.barplot(x = cluster_counts.index,
            y = cluster_counts.values,
            palette=sns.color_palette(["#e74c3c", "#34495e", "#2ecc71"]));
# + [markdown] id="AmxoLiHxE0og"
# ### **Bar Plot Inference** -
# There are around 62 Iris-versicolour, 50 Iris-virginica and roughly 38 Iris-setosa samples in the dataset, as predicted.
# + [markdown] id="NDxw2lOE_Vod"
# ### Violin plot
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="15kgokKE_eJ8" outputId="134e6fd5-4a23-4c82-e27d-1bc52cdd4023"
sns.violinplot(x="Cluster",y="petal width (cm)",data=data_with_clusters)
plt.show()
sns.violinplot(x="Cluster",y="sepal width (cm)",data=data_with_clusters)
plt.show()
sns.violinplot(x="Cluster",y="petal length (cm)",data=data_with_clusters)
plt.show()
sns.violinplot(x="Cluster",y="sepal length (cm)",data=data_with_clusters)
plt.show()
# + [markdown] id="2-yOxJnB7H2J"
# ### PAIRPLOT
# + colab={"base_uri": "https://localhost:8080/", "height": 726} id="nHpTIY0wFktI" outputId="08719d03-24a4-42d8-e383-9dc62a055b83"
### hue = 'Cluster' colours the plot per predicted cluster
### It will give 3 colours in the plot
sns.set_style('whitegrid') ### Sets grid style
sns.pairplot(data_with_clusters, hue = 'Cluster');
# + [markdown] id="53DGBbMY9tB-"
# ### **PairPlot insights**
#
#
#
# 1. Petal length and petal width seem to be positively correlated (they appear to have a linear relationship).
# 2. Iris-setosa seems to have smaller petal length and petal width than the others.
# 3. Overall, Iris-setosa appears to have smaller dimensions than the other flowers.
#
#
#
#
#
# + [markdown] id="dFEMX_GEUcpC"
# # **THANK YOU!**
| Task 2 Kmeans Clustering/Task-2_KMeans_Clustering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Working with large data using Datashader
#
# The various plotting-library backends supported by HoloViews, such as Matplotlib and Bokeh, each have a variety of limitations on the amount of data that is practical to work with. Bokeh in particular mirrors your data directly into an HTML page viewable in your browser, which can cause problems when data sizes approach the limited memory available for each web page in current browsers.
#
# Luckily, a visualization of even the largest dataset will be constrained by the resolution of your display device, and so one approach to handling such data is to pre-render or rasterize the data into a fixed-size array or image *before* sending it to the backend. The [Datashader](https://github.com/bokeh/datashader) library provides a high-performance big-data server-side rasterization pipeline that works seamlessly with HoloViews to support datasets that are orders of magnitude larger than those supported natively by the plotting-library backends, including millions or billions of points even on ordinary laptops.
#
# Here, we will see how and when to use Datashader with HoloViews Elements and Containers. For simplicity in this discussion we'll focus on simple synthetic datasets, but the [Datashader docs](http://datashader.org/topics) include a wide variety of real datasets that give a much better idea of the power of using Datashader with HoloViews, and [PyViz.org](http://pyviz.org) shows how to install and work with HoloViews and Datashader together.
#
# <style>.container { width:100% !important; }</style>
# +
import datashader as ds
import numpy as np
import holoviews as hv
from holoviews import opts
from holoviews.operation.datashader import datashade, shade, dynspread, rasterize
from holoviews.operation import decimate
hv.extension('bokeh','matplotlib')
decimate.max_samples=1000
dynspread.max_px=20
dynspread.threshold=0.5
def random_walk(n, f=5000):
"""Random walk in a 2D space, smoothed with a filter of length f"""
xs = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum()
ys = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum()
xs += 0.1*np.sin(0.1*np.array(range(n-1+f))) # add wobble on x axis
xs += np.random.normal(0, 0.005, size=n-1+f) # add measurement noise
ys += np.random.normal(0, 0.005, size=n-1+f)
return np.column_stack([xs, ys])
def random_cov():
"""Random covariance for use in generating 2D Gaussian distributions"""
A = np.random.randn(2,2)
return np.dot(A, A.T)
def time_series(T = 1, N = 100, mu = 0.1, sigma = 0.1, S0 = 20):
"""Parameterized noisy time series"""
dt = float(T)/N
t = np.linspace(0, T, N)
W = np.random.standard_normal(size = N)
W = np.cumsum(W)*np.sqrt(dt) # standard brownian motion
X = (mu-0.5*sigma**2)*t + sigma*W
S = S0*np.exp(X) # geometric brownian motion
return S
# -
# <center><div class="alert alert-info" role="alert">This notebook makes use of dynamic updates, which require a running a live Jupyter or Bokeh server.<br>
# When viewed statically, the plots will not update fully when you zoom and pan.<br></div></center>
# # Principles of datashading
#
# Because HoloViews elements are fundamentally data containers, not visualizations, you can very quickly declare elements such as ``Points`` or ``Path`` containing datasets that may be as large as the full memory available on your machine (or even larger if using Dask dataframes). So even for very large datasets, you can easily specify a data structure that you can work with for making selections, sampling, aggregations, and so on. However, as soon as you try to visualize it directly with either the matplotlib or bokeh plotting extensions, the rendering process may be prohibitively expensive.
#
# Let's start with a simple example we can visualize as normal:
# +
np.random.seed(1)
points = hv.Points(np.random.multivariate_normal((0,0), [[0.1, 0.1], [0.1, 1.0]], (1000,)),label="Points")
paths = hv.Path([random_walk(2000,30)], label="Paths")
points + paths
# -
# These browser-based plots are fully interactive, as you can see if you select the Wheel Zoom or Box Zoom tools and use your scroll wheel or click and drag.
#
# Because all of the data in these plots gets transferred directly into the web browser, the interactive functionality will be available even on a static export of this figure as a web page. Note that even though the visualization above is not computationally expensive, even with just 1000 points as in the scatterplot above, the plot already suffers from [overplotting](https://anaconda.org/jbednar/plotting_pitfalls), with later points obscuring previously plotted points.
#
# With much larger datasets, these issues will quickly make it impossible to see the true structure of the data. We can easily declare 50X or 1000X larger versions of the same plots above, but if we tried to visualize them they would be nearly unusable even if the browser did not crash:
# +
np.random.seed(1)
points = hv.Points(np.random.multivariate_normal((0,0), [[0.1, 0.1], [0.1, 1.0]], (1000000,)),label="Points")
paths = hv.Path([0.15*random_walk(100000) for i in range(10)],label="Paths")
#points + paths ## Danger! Browsers can't handle 1 million points!
# -
# Luckily, HoloViews Elements are just containers for data and associated metadata, not plots, so HoloViews can generate entirely different types of visualizations from the same data structure when appropriate. For instance, in the plot on the left below you can see the result of applying a `decimate()` operation acting on the `points` object, which will automatically downsample this million-point dataset to at most 1000 points at any time as you zoom in or out:
decimate(points) + datashade(points) + datashade(paths)
# Decimating a plot in this way can be useful, but it discards most of the data, yet still suffers from overplotting. If you have Datashader installed, you can instead use the `datashade()` operation to create a dynamic Datashader-based Bokeh plot. The middle plot above shows the result of using `datashade()` to create a dynamic Datashader-based plot out of an Element with arbitrarily large data. In the Datashader version, a new image is regenerated automatically on every zoom or pan event, revealing all the data available at that zoom level and avoiding issues with overplotting by dynamically rescaling the colors used. The same process is used for the line-based data in the Paths plot.
#
# These two Datashader-based plots are similar to the native Bokeh plots above, but instead of making a static Bokeh plot that embeds points or line segments directly into the browser, HoloViews sets up a Bokeh plot with dynamic callbacks that render the data as an RGB image using Datashader instead. The dynamic re-rendering provides an interactive user experience even though the data itself is never provided directly to the browser. Of course, because the full data is not in the browser, a static export of this page (e.g. on holoviews.org or on anaconda.org) will only show the initially rendered version, and will not update with new images when zooming as it will when there is a live Python process available.
#
# Though you can no longer have a completely interactive exported file, with the Datashader version on a live server you can now change the number of data points from 1000000 to 10000000 or more to see how well your machine will handle larger datasets. It will get a bit slower, but if you have enough memory, it should still be very usable, and should never crash your browser as transferring the whole dataset into your browser would. If you don't have enough memory, you can instead set up a [Dask](http://dask.pydata.org) dataframe as shown in other Datashader examples, which will provide out-of-core and/or distributed processing to handle even the largest datasets.
#
# The `datashade()` operation is actually a "macro" or shortcut that combines the two main computations done by datashader, namely `shade()` and `rasterize()`:
rasterize(points).hist() + shade(rasterize(points)) + datashade(points)
# In all three of the above plots, `rasterize()` is being called to aggregate the data (a large set of x,y locations) into a rectangular grid, with each grid cell counting up the number of points that fall into it. In the plot on the left, only `rasterize()` is done, and the resulting numeric array of counts is passed to Bokeh for colormapping. Bokeh can then use dynamic (client-side, browser-based) operations in JavaScript, allowing users to have dynamic control over even static HTML plots. For instance, in this case, users can use the Box Select tool and select a range of the histogram shown, dynamically remapping the colors used in the plot to cover the selected range.
#
# The other two plots should be identical. In both cases, the numerical array output of `rasterize()` is mapped into RGB colors by Datashader itself, in Python ("server-side"), which allows special Datashader computations like the histogram-equalization in the above plots and the "spreading" discussed below. The `shade()` and `datashade()` operations accept a `cmap` argument that lets you control the colormap used, which can be selected to match the HoloViews/Bokeh `cmap` option but is strictly independent of it. See ``hv.help(rasterize)``, ``hv.help(shade)``, and ``hv.help(datashade)`` for options that can be selected, and the [Datashader web site](http://datashader.org) for all the details. The lower-level `aggregate()` and `regrid()` give more control over how the data is aggregated.
#
# Since datashader only sends the data currently in view to the plotting backend, the default behavior is to rescale the colormap to the range of the visible data as the zoom level changes. This behavior may not be desirable when working with images; to instead use a fixed colormap range, the `clim` parameter can be passed to the `bokeh` backend via the `opts()` method. Note that this approach works with `rasterize()`, where the colormapping is done by the `bokeh` backend. With `datashade()`, the colormapping is done with the `shade()` function, which takes a `clims` parameter directly instead of passing additional parameters to the backend via `opts()`.
# +
n = 10_000
# Strong signal on top
rs = np.random.RandomState(101010)
x = rs.pareto(n, n)
y = x + rs.standard_normal(n)
img1, *_ = np.histogram2d(x, y, bins=60)
# Weak signal in the middle
x2 = rs.standard_normal(n)
y2 = 5 * x + 10 * rs.standard_normal(n)
img2, *_ = np.histogram2d(x2, y2, bins=60)
img = img1 + img2
hv_img = hv.Image(img).opts(active_tools=['wheel_zoom'])
auto_scale_grid = rasterize(hv_img).opts(title='Automatic color range rescaling')
fixed_scale_grid = rasterize(hv_img).opts(title='Fixed color range', clim=(img.min(), img.max()))
auto_scale_grid + fixed_scale_grid; # Output suppressed and gif shown below instead
# -
# <img src="http://assets.holoviews.org/gifs/guides/user_guide/Large_Data/rasterize_color_range.gif"></img>
# ## Spreading
#
# The Datashader examples above treat points and lines as infinitesimal in width, such that a given point or small bit of line segment appears in at most one pixel. This approach ensures that the overall distribution of the points will be mathematically well founded -- each pixel will scale in value directly by the number of points that fall into it, or by the lines that cross it.
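Conceptually, the count aggregation performed for points is a 2D histogram over a fixed grid; a NumPy-only sketch of the idea (Datashader itself does this far more efficiently and supports many aggregators beyond counting):

```python
import numpy as np

rng = np.random.default_rng(0)
xs, ys = rng.normal(size=100_000), rng.normal(size=100_000)

# aggregate the points into a fixed 300x300 grid of counts;
# it is this small array, not the raw points, that gets colormapped
counts, _, _ = np.histogram2d(xs, ys, bins=300)
print(counts.shape, counts.sum())
```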
#
# However, many monitors are sufficiently high resolution that the resulting point or line can be difficult to see---a single pixel may not actually be visible on its own, and its color may likely be very difficult to make out. To compensate for this, HoloViews provides access to Datashader's image-based "spreading", which makes isolated pixels "spread" into adjacent ones for visibility. There are two varieties of spreading supported:
#
# 1. ``spread``: fixed spreading of a certain number of pixels, which is useful if you want to be sure how much spreading is done regardless of the properties of the data.
# 2. ``dynspread``: spreads up to a maximum size as long as it does not exceed a specified fraction of adjacency between pixels.
#
# Dynamic spreading is typically more useful, because it adjusts depending on how close the datapoints are to each other on screen. Both types of spreading require Datashader to do the colormapping (applying `shade`), because they operate on RGB pixels, not data arrays.
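Fixed spreading can be pictured as a morphological max-filter over the rendered grid: every non-empty pixel also lights up its neighbours. A rough NumPy sketch, illustrative only (Datashader's `spread` operates on RGBA images and composites colours more carefully):

```python
import numpy as np

def spread_once(img):
    """Max-filter over the 3x3 neighbourhood: one pixel of fixed spreading."""
    padded = np.pad(img, 1)
    shifted = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return np.max(shifted, axis=0)

img = np.zeros((5, 5))
img[2, 2] = 1.0          # a single isolated pixel...
print(spread_once(img))  # ...becomes a visible 3x3 block
```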
#
# You can compare the results in the two plots below after zooming in:
datashade(points) + dynspread(datashade(points))
# Both plots show the same data, and look identical when zoomed out, but when zoomed in enough you should be able to see the individual data points on the right while the ones on the left are barely visible. The dynspread parameters typically need some hand tuning, as the only purpose of such spreading is to make things visible on a particular monitor for a particular observer; the underlying mathematical operations in Datashader do not normally need parameters to be adjusted.
#
# The same operation works similarly for line segments:
datashade(paths) + dynspread(datashade(paths))
# # Multidimensional plots
#
# The above plots show two dimensions of data plotted along *x* and *y*, but Datashader operations can be used with additional dimensions as well. For instance, an extra dimension (here called `k`), can be treated as a category label and used to colorize the points or lines. Compared to a standard scatterplot that would suffer from overplotting, here the result will be merged mathematically by Datashader, completely avoiding any overplotting issues except local ones due to spreading:
# +
np.random.seed(3)
kdims=['d1','d2']
num_ks=8
def rand_gauss2d():
return 100*np.random.multivariate_normal(np.random.randn(2), random_cov(), (100000,))
gaussians = {i: hv.Points(rand_gauss2d(), kdims) for i in range(num_ks)}
lines = {i: hv.Curve(time_series(N=10000, S0=200+np.random.rand())) for i in range(num_ks)}
gaussspread = dynspread(datashade(hv.NdOverlay(gaussians, kdims='k'), aggregator=ds.count_cat('k')))
linespread = dynspread(datashade(hv.NdOverlay(lines, kdims='k'), aggregator=ds.count_cat('k')))
(gaussspread + linespread).opts(opts.RGB(width=400))
# -
# Because Bokeh only ever sees an image, providing legends and keys has to be done separately, though we are working to make this process more seamless. For now, you can show a legend by adding a suitable collection of labeled points:
# +
# definition copied here to ensure independent pan/zoom state for each dynamic plot
gaussspread = dynspread(datashade(hv.NdOverlay(gaussians, kdims=['k']), aggregator=ds.count_cat('k')))
from datashader.colors import Sets1to3 # default datashade() and shade() color cycle
color_key = list(enumerate(Sets1to3[0:num_ks]))
color_points = hv.NdOverlay({k: hv.Points([0,0], label=str(k)).opts(color=v) for k, v in color_key})
(color_points * gaussspread).opts(width=600)
# -
# Here the dummy points are at 0,0 for this dataset, but would need to be at another suitable value for data that is in a different range.
# ## Working with time series
# HoloViews also makes it possible to datashade large timeseries using the ``datashade`` and ``rasterize`` operations:
# +
import pandas as pd
dates = pd.date_range(start="2014-01-01", end="2016-01-01", freq='1D') # or '1min'
curve = hv.Curve((dates, time_series(N=len(dates), sigma = 1)))
datashade(curve, cmap=["blue"]).opts(width=800)
# -
# HoloViews also supplies some operations that are useful in combination with Datashader timeseries. For instance, you can compute a rolling mean of the results and then show a subset of outlier points, which will then support hover, selection, and other interactive Bokeh features:
# +
from holoviews.operation.timeseries import rolling, rolling_outlier_std
smoothed = rolling(curve, rolling_window=50)
outliers = rolling_outlier_std(curve, rolling_window=50, sigma=2)
ds_curve = datashade(curve, cmap=["blue"])
spread = dynspread(datashade(smoothed, cmap=["red"]),max_px=1)
(ds_curve * spread * outliers).opts(
opts.Scatter(line_color="black", fill_color="red", size=10, tools=['hover', 'box_select'], width=800))
# -
# Note that the above plot will look blocky in a static export (such as on anaconda.org), because the exported version is generated without taking the size of the actual plot (using default height and width for Datashader) into account, whereas the live notebook automatically regenerates the plot to match the visible area on the page. The result of all these operations can be laid out, overlaid, selected, and sampled just like any other HoloViews element, letting you work naturally with even very large datasets.
#
#
# # Hover info
#
# As you can see in the examples above, converting the data to an image using Datashader makes it feasible to work with even very large datasets interactively. One unfortunate side effect is that the original datapoints and line segments can no longer be used to support "tooltips" or "hover" information directly for RGB images generated with `datashade`; that data simply is not present at the browser level, and so the browser cannot unambiguously report information about any specific datapoint.
#
# If you do need hover information, there are two good options available:
#
# 1) Use the ``rasterize`` operation without `shade`, which will let the plotting code handle the conversion to colors while still having the actual aggregated data to support hovering
#
# 2) Overlay a separate layer as a ``QuadMesh`` or ``Image`` containing the hover information
# +
from holoviews.streams import RangeXY
rasterized = rasterize(points, width=400, height=400)
fixed_hover = (datashade(points, width=400, height=400) *
hv.QuadMesh(rasterize(points, width=10, height=10, dynamic=False)))
dynamic_hover = (datashade(points, width=400, height=400) *
rasterize(points, width=10, height=10, streams=[RangeXY]).apply(hv.QuadMesh))
(rasterized + fixed_hover + dynamic_hover).opts(
opts.QuadMesh(tools=['hover'], alpha=0, hover_alpha=0.2),
opts.Image(tools=['hover']))
# -
# In the above examples, the plot on the left provides hover information directly on the aggregated ``Image``. The middle plot displays hover information as a ``QuadMesh`` at a fixed spatial scale, while the one on the right reports on an area that scales with the zoom level so that arbitrarily small regions of data space can be examined, which is generally more useful (but requires a live Python server).
# # Element types supported for Datashading
#
# Fundamentally, what datashader does is to rasterize data, i.e., render a representation of it into a regularly gridded rectangular portion of a two-dimensional plane. Datashader natively supports seven basic types of rasterization:
#
# - **points**: zero-dimensional objects aggregated by position alone, each point covering zero area in the plane and thus falling into exactly one grid cell of the resulting array (if the point is within the bounds being aggregated).
# - **line**: polyline/multiline objects (connected series of line segments), with each segment having a fixed length but zero width and crossing each grid cell at most once.
# - **area**: a region to fill either between the supplied y-values and the origin or between two supplied lines
# - **trimesh**: irregularly spaced triangular grid, with each triangle covering a portion of the 2D plane and thus potentially crossing multiple grid cells (thus requiring interpolation/upsampling). Depending on the zoom level, a single pixel can also include multiple triangles, which then becomes similar to the `points` case (requiring aggregation/downsampling of all triangles covered by the pixel).
# - **raster**: an axis-aligned regularly gridded two-dimensional subregion of the plane, with each grid cell in the source data covering more than one grid cell in the output grid (requiring interpolation/upsampling), or with each grid cell in the output grid including contributions from more than one grid cell in the input grid (requiring aggregation/downsampling).
# - **quadmesh**: a recti-linear or curvi-linear mesh (like a raster, but allowing nonuniform spacing and coordinate mapping) where each quad can cover one or more cells in the output (requiring upsampling, currently only as nearest neighbor), or with each output grid cell including contributions from more than one input grid cell (requiring aggregation/downsampling).
# - **polygons**: arbitrary filled shapes in 2D space (bounded by a piecewise linear set of segments), optionally punctuated by similarly bounded internal holes
#
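# At its core, the **points** case above is a 2D histogram: each point falls into exactly one grid cell, and the cells accumulate counts. A minimal NumPy sketch of that aggregation step (the grid size and bounds here are arbitrary illustrative choices, not Datashader defaults):

```python
import numpy as np

# Random points in the unit square
rng = np.random.default_rng(0)
xs, ys = rng.random(10_000), rng.random(10_000)

# Aggregate into a 400x400 grid of counts -- the "rasterize" step;
# "shade" would then map these counts through a colormap to pixels.
counts, _, _ = np.histogram2d(xs, ys, bins=400, range=[[0, 1], [0, 1]])

print(counts.shape)       # (400, 400)
print(int(counts.sum()))  # 10000: every point lands in exactly one cell
```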
# Datashader focuses on implementing these rasterization cases very efficiently, and HoloViews in turn can use them to render a very large range of specific types of data:
#
# ### Supported Elements
#
# - **points**: [`hv.Nodes`](../reference/elements/bokeh/Graph.ipynb), [`hv.Points`](../reference/elements/bokeh/Points.ipynb), [`hv.Scatter`](../reference/elements/bokeh/Scatter.ipynb)
# - **line**: [`hv.Contours`](../reference/elements/bokeh/Contours.ipynb), [`hv.Curve`](../reference/elements/bokeh/Curve.ipynb), [`hv.Path`](../reference/elements/bokeh/Path.ipynb), [`hv.Graph`](../reference/elements/bokeh/Graph.ipynb), [`hv.EdgePaths`](../reference/elements/bokeh/Graph.ipynb), [`hv.Spikes`](../reference/elements/bokeh/Spikes.ipynb), [`hv.Segments`](../reference/elements/bokeh/Segments.ipynb)
# - **area**: [`hv.Area`](../reference/elements/bokeh/Area.ipynb), [`hv.Spread`](../reference/elements/bokeh/Spread.ipynb)
# - **raster**: [`hv.Image`](../reference/elements/bokeh/Image.ipynb), [`hv.HSV`](../reference/elements/bokeh/HSV.ipynb), [`hv.RGB`](../reference/elements/bokeh/RGB.ipynb)
# - **trimesh**: [`hv.TriMesh`](../reference/elements/bokeh/TriMesh.ipynb)
# - **quadmesh**: [`hv.QuadMesh`](../reference/elements/bokeh/QuadMesh.ipynb)
# - **polygons**: [`hv.Polygons`](../reference/elements/bokeh/Polygons.ipynb)
#
# Other HoloViews elements *could* be supported, but do not currently have a useful datashaded representation:
#
# ### Elements not yet supported
#
# - **line**: [`hv.Spline`](../reference/elements/bokeh/Spline.ipynb), [`hv.VectorField`](../reference/elements/bokeh/VectorField.ipynb)
# - **raster**: [`hv.HeatMap`](../reference/elements/bokeh/HeatMap.ipynb), [`hv.Raster`](../reference/elements/bokeh/Raster.ipynb)
#
# There are also other Elements that are not expected to be useful with datashader because they are isolated annotations, are already summaries or aggregations of other data, have graphical representations that are only meaningful at a certain size, or are text based:
#
# ### Not useful to support
#
# - datashadable annotations: [`hv.Arrow`](../reference/elements/bokeh/Arrow.ipynb), [`hv.Bounds`](../reference/elements/bokeh/Bounds.ipynb), [`hv.Box`](../reference/elements/bokeh/Box.ipynb), [`hv.Ellipse`](../reference/elements/bokeh/Ellipse.ipynb) (actually do work with datashade currently, but not officially supported because they are not vectorized and thus unlikely to have enough items to be worth datashading)
# - other annotations: [`hv.Arrow`](../reference/elements/bokeh/Arrow.ipynb), [`hv.HLine`](../reference/elements/bokeh/HLine.ipynb), [`hv.VLine`](../reference/elements/bokeh/VLine.ipynb), [`hv.Text`](../reference/elements/bokeh/Text.ipynb)
# - kdes: [`hv.Distribution`](../reference/elements/bokeh/Distribution.ipynb), [`hv.Bivariate`](../reference/elements/bokeh/Bivariate.ipynb)
# - categorical/symbolic: [`hv.BoxWhisker`](../reference/elements/bokeh/BoxWhisker.ipynb), [`hv.Bars`](../reference/elements/bokeh/Bars.ipynb), [`hv.ErrorBars`](../reference/elements/bokeh/ErrorBars.ipynb)
# - tables: [`hv.Table`](../reference/elements/bokeh/Table.ipynb), [`hv.ItemTable`](../reference/elements/bokeh/ItemTable.ipynb)
#
# Examples of each supported Element type:
# +
hv.output(backend='matplotlib')
from bokeh.sampledata.us_counties import data as counties
opts.defaults(
opts.Image(aspect=1, axiswise=True, xaxis='bare', yaxis='bare'),
opts.RGB(aspect=1, axiswise=True, xaxis='bare', yaxis='bare'),
opts.HSV(aspect=1, axiswise=True, xaxis='bare', yaxis='bare'),
opts.Layout(vspace=0.1, hspace=0.1, sublabel_format="", fig_size=80))
np.random.seed(12)
N=100
pts = [(10*i/N, np.sin(10*i/N)) for i in range(N)]
x = y = np.linspace(0, 5, int(np.sqrt(N)))
xs,ys = np.meshgrid(x,y)
z = np.sin(xs)*np.cos(ys)
r = 0.5*np.sin(0.1*xs**2+0.05*ys**2)+0.5
g = 0.5*np.sin(0.02*xs**2+0.2*ys**2)+0.5
b = 0.5*np.sin(0.02*xs**2+0.02*ys**2)+0.5
n=20
coords = np.linspace(-1.5,1.5,n)
X,Y = np.meshgrid(coords, coords);
Qx = np.cos(Y) - np.cos(X)
Qy = np.sin(Y) + np.sin(X)
Z = np.sqrt(X**2 + Y**2)
s = np.random.randn(100).cumsum()
e = s + np.random.randn(100)
opts2 = dict(filled=True, edge_color='z')
tri = hv.TriMesh.from_vertices(hv.Points(np.random.randn(N,3), vdims='z')).opts(**opts2)
(tri + tri.edgepaths + datashade(tri, aggregator=ds.mean('z')) + datashade(tri.edgepaths)).cols(2)
shadeable = [elemtype(pts) for elemtype in [hv.Curve, hv.Scatter]]
shadeable += [hv.Path(counties[(1, 1)], ['lons', 'lats']), hv.Points(counties[(1, 1)], ['lons', 'lats'])]
shadeable += [hv.Spikes(np.random.randn(10000))]
shadeable += [hv.Segments((np.arange(100), s, np.arange(100), e))]
shadeable += [hv.Area(np.random.randn(10000).cumsum())]
shadeable += [hv.Spread((np.arange(10000), np.random.randn(10000).cumsum(), np.random.randn(10000)*10))]
shadeable += [hv.Image((x,y,z))]
shadeable += [hv.QuadMesh((Qx,Qy,Z))]
shadeable += [hv.Graph(((np.zeros(N), np.arange(N)),))]
shadeable += [tri.edgepaths]
shadeable += [tri]
shadeable += [hv.Polygons([county for county in counties.values() if county['state'] == 'tx'], ['lons', 'lats'], ['name'])]
shadeable += [hv.operation.contours(hv.Image((x,y,z)), levels=10)]
rasterizable = [hv.RGB(np.dstack([r,g,b])), hv.HSV(np.dstack([g,b,r]))]
ds_opts = {
hv.Path: dict(aggregator='any'),
hv.Points: dict(aggregator='any'),
hv.Polygons: dict(aggregator=ds.count_cat('name'), color_key=hv.plotting.util.process_cmap('glasbey')),
hv.Segments: dict(aggregator='any')
}
hv.Layout([dynspread(datashade(e.relabel(e.__class__.name), **ds_opts.get(e.__class__, {}))) for e in shadeable] +
[ rasterize(e.relabel(e.__class__.name)) for e in rasterizable]).opts(shared_axes=False).cols(6)
# -
# Here we called `datashade()` on each Element type, letting Datashader do the full process of rasterization and shading, except that for `RGB` and `HSV` we only called `rasterize()` or else the results would have been converted into a monochrome image.
#
# For comparison, you can see the corresponding non-datashaded plots (as long as you leave N lower than 10000 unless you have a long time to wait!):
el_opts = dict(aspect=1, axiswise=True, xaxis='bare', yaxis='bare')
hv.Layout([e.relabel(e.__class__.name).opts(**el_opts) for e in shadeable + rasterizable]).cols(6)
# These two examples use Matplotlib, but if they were switched to Bokeh and you had a live server, they would support dynamic re-rendering on zoom and pan so that you could explore the full range of data available (e.g. even very large raster images, networks, paths, point clouds, or meshes).
#
#
# # Container types supported for datashading
#
# In the above examples `datashade()` was called directly on each Element, but it can also be called on Containers, in which case each Element in the Container will be datashaded separately (for all Container types other than a Layout):
# +
hv.output(dpi=80, size=100)
curves = {'+':hv.Curve(pts), '-':hv.Curve([(x, -1.0*y) for x, y in pts])}
supported = [hv.HoloMap(curves,'sign'), hv.Overlay(list(curves.values())), hv.NdOverlay(curves), hv.GridSpace(hv.NdOverlay(curves))]
hv.Layout([datashade(e.relabel(e.__class__.name)) for e in supported]).cols(4)
# -
dynspread(datashade(hv.NdLayout(curves,'sign')))
# # Optimizing performance
#
# Datashader and HoloViews have different design principles that are worth keeping in mind when using them in combination, if you want to ensure good overall performance. By design, Datashader supports only a small number of operations and datatypes, focusing only on what can be implemented very efficiently. HoloViews instead focuses on supporting the typical workflows of Python users, recognizing that the most computationally efficient choice is only going to be faster overall if it also minimizes the time users have to spend getting things working.
#
# HoloViews thus helps you get something working quickly. Once it is working, if you find yourself repeating the workflow often or running up against the limits of your computing hardware, review the following issues and suggestions to see whether you can get much better performance.
#
# ### Use a Datashader-supported data structure
#
# HoloViews helpfully tries to convert whatever data you have provided into what Datashader supports, which is good for optimizing your time to an initial solution, but will not always be the fastest approach computationally. If you ensure that you store your data in a format that Datashader understands already, HoloViews can simply pass it down to Datashader without copying or transforming it:
#
# 1. For point, line, and trimesh data, Datashader supports Dask and Pandas dataframes, and so those two data sources will be fastest. Of those two, Dask Dataframes will usually be somewhat faster and also able to make use of distributed computational resources and out-of-core processing.
# 2. For rasters and quadmeshes, Datashader supports xarray objects natively, and so if your data is provided as an xarray already, plotting will be faster.
# 3. For polygons Datashader supports [spatialpandas](https://github.com/holoviz/spatialpandas) DataFrames.
#
# See the [Datashader docs](http://datashader.org) for examples of dealing with even quite large datasets (in the billions of points) on commodity hardware, including many HoloViews-based examples.
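# As a concrete illustration of point 1 above, the aim is simply to keep the data as flat numeric columns, so that HoloViews can hand it to Datashader without copying or transforming it. A minimal sketch with pandas (the column names and sizes are arbitrary):

```python
import numpy as np
import pandas as pd

# Flat float64 columns: this layout can be passed straight through
# to Datashader's aggregation step with no conversion.
df = pd.DataFrame({
    'x': np.random.randn(100_000),
    'y': np.random.randn(100_000),
})
# e.g. rasterize(hv.Points(df, ['x', 'y'])), assuming hv/rasterize are imported

print(df.dtypes.tolist())  # [dtype('float64'), dtype('float64')]
```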
#
# ### Cache initial processing with `precompute=True`
#
# In the typical case of having datasets much larger than the plot resolution, HoloViews Datashader-based operations that work on the full dataset (`rasterize`, `aggregate`, `regrid`) are computationally expensive; the others (`shade`, `spread`, `dynspread`, etc.) are not.
#
# The expensive operations are all of type `ResamplingOperation`, which has a parameter `precompute` (see `hv.help(hv.operation.datashader.rasterize)`, etc.). Precompute can be used to get faster performance in interactive usage by caching the last set of data used in plotting (*after* any transformations needed) and reusing it when it is requested again. This is particularly useful when your data is not in one of the supported data formats already and needs to be converted. `precompute` is False by default, because it requires using memory to store the cached data, but if you have enough memory, you can enable it so that repeated interactions (such as zooming and panning) will be much faster than the first one. In practice, most Datashader plots don't need extensive precomputing, but enabling it for TriMesh and Polygon plots can greatly speed up interactive usage.
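# The effect of `precompute` is essentially memoization: the expensive conversion of the raw data runs once, and later zoom/pan events reuse the cached result. In pure-Python terms (the names below are illustrative, not HoloViews internals):

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def prepare(data_key):
    # Stand-in for an expensive conversion to a Datashader-friendly format
    print('converting', data_key)
    return data_key.upper()

prepare('points')  # first call performs (and prints) the conversion
prepare('points')  # second call is served from the cache: no message
```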
#
# ### Use GPU support
#
# Many elements now also support aggregation directly on a GPU-based datastructure such as a [cuDF DataFrame](https://github.com/rapidsai/cudf) or an Xarray DataArray backed by a [cupy](https://github.com/cupy/cupy) array. These data structures can be passed directly to the appropriate HoloViews elements just as you would use a Pandas or other Xarray object. For instance, a cuDF can be used on elements like `hv.Points` and `hv.Curve`, while a cupy-backed DataArray raster or quadmesh can be passed to `hv.QuadMesh` elements. When used with Datashader, the GPU implementation can result in 10-100x speedups, as well as avoiding having to transfer the data out of the GPU for plotting (sending only the final rendered plot out of the GPU's memory). To see which HoloViews elements are supported, see the [datashader performance guide](https://datashader.org/user_guide/Performance.html). As of the Datashader 0.11 release, all point, line, area, and quadmesh aggregations are supported when using a GPU backed datastructure, including raster objects like `hv.Image` if first converted to `hv.Quadmesh`.
#
# ### Project data only once
#
# If you are working with geographic data using [GeoViews](http://geoviews.org) that needs to be projected before display and/or before datashading, GeoViews will have to do this every time you update a plot, which can drown out the performance improvement you get by using Datashader. GeoViews allows you to project the entire dataset at once using `gv.operation.project`, and once you do this you should be able to use Datashader at full speed.
#
# If you follow these suggestions, the combination of HoloViews and Datashader will let you work uniformly with data covering a huge range of sizes. Per session or per plot, you can trade off the ability to export user-manipulable plots against file size and browser compatibility, while still rendering even the largest datasets faithfully. HoloViews makes the full power of Datashader available in just a few lines of code, giving you a natural way to work with your data regardless of its size.
| examples/user_guide/15-Large_Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import os
import string
dataset = pd.read_csv("train_features.csv")
type(dataset)
# +
#dataset = dataset.sample(frac=1).reset_index(drop=True)
# -
dataset.head(5)
maindir = r"C:/Users/Lenovo/Desktop/leaf identification"
ds_path = os.path.join(maindir, "Leaves")
img_files = os.listdir(ds_path)
breakpoints = [1001,1059,1060,1122,1552,1616,1123,1194,1195,1267,1268,1323,1324,1385,1386,1437,1497,1551,1438,1496,2001,2050,2051,2113,2114,2165,2166,2230,2231,2290,2291,2346,2347,2423,2424,2485,2486,2546,2547,2612,2616,2675,3001,3055,3056,3110,3111,3175,3176,3229,3230,3281,3282,3334,3335,3389,3390,3446,3447,3510,3511,3563,3566,3621]
target_list = []
for file in img_files:
target_num = int(file.split(".")[0])
flag = 0
i = 0
for i in range(0,len(breakpoints),2):
if((target_num >= breakpoints[i]) and (target_num <= breakpoints[i+1])):
flag = 1
break
if(flag==1):
target = int((i/2))
target_list.append(target)
y = np.array(target_list)
y
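# The file-to-class mapping above can be factored into a small helper, which also makes the edge case explicit (a number outside every range yields no class instead of silently reusing the previous flag):

```python
def class_for(num, breakpoints):
    """Return the class index for an image number, or None if no (start, end) pair matches."""
    for i in range(0, len(breakpoints), 2):
        if breakpoints[i] <= num <= breakpoints[i + 1]:
            return i // 2
    return None

# The pairs need not be globally sorted; each (start, end) pair is checked in turn.
print(class_for(1001, [1001, 1059, 1060, 1122]))  # 0
print(class_for(1100, [1001, 1059, 1060, 1122]))  # 1
print(class_for(9999, [1001, 1059, 1060, 1122]))  # None
```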
X = dataset.iloc[:,1:]
X.head()
y[0:5]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 142)
X_train.head(5)
y_train[0:5]
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
X_train[0:2]
y_train[0:4]
from sklearn import svm
clf = svm.SVC()
clf.fit(X_train,y_train)
import pickle
with open('svm_classifier.pkl', 'wb') as f:
    pickle.dump(clf, f)
with open('svm_classifier.pkl', 'rb') as f:
    clk = pickle.load(f)
y_pred = clk.predict(X_test)
y_pred
from sklearn import metrics
metrics.accuracy_score(y_test, y_pred)
print(metrics.classification_report(y_test, y_pred))
from sklearn.model_selection import GridSearchCV
parameters = [{'kernel': ['rbf'],
'gamma': [1e-4, 1e-3, 0.01, 0.1, 0.2, 0.5],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}
]
svm_clf = GridSearchCV(svm.SVC(decision_function_shape='ovr'), parameters, cv=5)
svm_clf.fit(X_train, y_train)
svm_clf.best_params_
means = svm_clf.cv_results_['mean_test_score']
stds = svm_clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, svm_clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
y_pred_svm = svm_clf.predict(X_test)
metrics.accuracy_score(y_test, y_pred_svm)
print(metrics.classification_report(y_test, y_pred_svm))
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(X)
var= pca.explained_variance_ratio_
var
import matplotlib.pyplot as plt
# %matplotlib inline
var1=np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)
plt.plot(var1)
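# A common follow-up to this cumulative-variance curve is choosing the smallest number of components that reaches a target percentage (95 here is an arbitrary illustrative threshold):

```python
import numpy as np

def components_for_variance(cum_var_pct, target=95.0):
    """Smallest k such that the first k components explain >= target percent.

    Assumes the target is actually reached somewhere in the curve."""
    return int(np.argmax(np.asarray(cum_var_pct) >= target)) + 1

print(components_for_variance([40.0, 70.0, 90.0, 96.0, 99.0]))  # 4
```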
| leaf identification/identification final-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit
# language: python
# name: python3
# ---
# # The sixth chapter from <NAME>'s book
#
# ## [More info about loops in Python](https://realpython.com/python-while-loop/)
# # While
# basic example of using while loop
count = 1
while count <= 5:
print(count)
count += 1
a = ['foo', 'bar', 'baz']
while a:
print(a.pop(-1))
while True:
stuff = input('wpisz tekst [lub q aby zakonczyc]: ')
if stuff == 'q':
break
print(stuff.capitalize())
# break in loop
n = 5
while n > 0:
n -= 1
if n == 2:
break
print(n)
print('Loop ended.')
# while and else in loop
n = 5
while n > 0:
n -= 1
print(n)
else:
print('Loop done.')
# # If, else
n = 5
while n > 0:
n -= 1
print(n)
if n == 2:
break
else:
print('Loop done.')
# +
a = ['foo', 'bar', 'baz', 'qux']
s = 'corge'
i = 0
while i < len(a):
if a[i] == s:
# Processing for item found
break
i += 1
else:
# Processing for item not found
print(s, 'not found in list.')
# -
a = ['foo', 'bar', 'baz']
while True:
if not a:
break
print(a.pop(-1))
age = 17
gender = 'M'
if age < 18:
if gender == 'M':
print('son')
else:
print('daughter')
elif age >= 18 and age < 65:
if gender == 'M':
print('father')
else:
print('mother')
else:
if gender == 'M':
print('grandfather')
else:
print('grandmother')
a = ['foo', 'bar']
while len(a):
print(a.pop(0))
b = ['baz', 'qux']
while len(b):
print('>', b.pop(0))
n = 5
while n > 0: n -= 1; print(n)
# +
# for
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for w in words:
print(w, len(w))
# +
# for more interesting example
# Create a sample collection
users = {'Hans': 'active', 'Éléonore': 'inactive', '景太郎': 'active'}
# Strategy: Iterate over a copy
for user, status in users.copy().items():
if status == 'inactive':
del users[user]
# Strategy: Create a new collection
active_users = {}
for user, status in users.items():
if status == 'active':
active_users[user] = status
# +
# for in range
for i in range(5):
print(i)
# -
a = ['Mary', 'had', 'a', 'little', 'lamb']
for i in range(len(a)):
print(i, a[i])
# +
# break and continue
for n in range(2, 10):
for x in range(2, n):
if n % x == 0:
print(n, 'equals', x, '*', n//x)
break
else:
# loop fell through without finding a factor
print(n, 'is a prime number')
# -
for num in range(2, 10):
if num % 2 == 0:
print("Found an even number", num)
continue
print("Found an odd number", num)
def http_error(status):
match status:
case 400:
return "Bad request"
case 404:
return "Not found"
case 418:
return "I'm a teapot"
case _:
return "Something's wrong with the internet"
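# The match statement above is equivalent to a dictionary lookup with a default, which also works on Python versions older than 3.10:

```python
def http_error_dict(status):
    # Same mapping as the match/case version, expressed as dict.get with a default
    return {
        400: "Bad request",
        404: "Not found",
        418: "I'm a teapot",
    }.get(status, "Something's wrong with the internet")

print(http_error_dict(404))  # Not found
print(http_error_dict(500))  # Something's wrong with the internet
```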
# +
# Python program to
# demonstrate continue
# statement
# loop from 1 to 10
for i in range(1, 11):
# If i is equals to 6,
# continue to next iteration
# without printing
if i == 6:
continue
else:
# otherwise print the value
# of i
print(i, end=" ")
# -
# else
for i in range(1, 4):
print(i)
else: # Executed because no break in for
print("No Break")
# +
for i in range(1, 4):
print(i)
break
else: # Not executed as there is a break
print("No Break")
# +
# Python 3.x program to check if an array consists
# of even number
def contains_even_number(l):
for ele in l:
if ele % 2 == 0:
print ("list contains an even number")
break
# This else executes only if break is NEVER
# reached and loop terminated after all iterations.
else:
print ("list does not contain an even number")
# Driver code
print ("For List 1:")
contains_even_number([1, 9, 8])
print (" \nFor List 2:")
contains_even_number([1, 3, 5])
# -
count = 0
while (count < 1):
count = count+1
print(count)
break
else:
print("No Break")
# # Python "for" Loops
#
# [More info about the for loop](https://realpython.com/python-for-loop/)
# Of the loop types listed above, Python only implements the last: collection-based iteration. At first blush, that may seem like a raw deal, but rest assured that Python’s implementation of definite iteration is so versatile that you won’t end up feeling cheated!
#
# Shortly, you’ll dig into the guts of Python’s for loop in detail. But for now, let’s start with a quick prototype and example, just to get acquainted.
# ## Iterating Through a Dictionary
d = {'foo': 1, 'bar': 2, 'baz': 3}
for k in d:
print(k)
# +
i, j = (1, 2)
print(i, j)
for i, j in [(1, 2), (3, 4), (5, 6)]:
print(i, j)
# -
d = {'foo': 1, 'bar': 2, 'baz': 3}
for k, v in d.items():
print('k =', k, ', v =', v)
# # The range() Function
# +
print(list(range(-5, 5)))
print(list(range(5, -5)))     # empty: the default step of +1 never reaches -5
print(list(range(5, -5, -1)))
# -
for i in ['foo', 'bar', 'baz', 'qux']:
print(i)
else:
print('Done.') # Will execute
# # Break
for i in ['foo', 'bar', 'baz', 'qux']:
if 'b' in i:
break
print(i)
for i in ['foo', 'bar', 'baz', 'qux']:
if 'b' in i:
continue
print(i)
for i in ['foo', 'bar', 'baz', 'qux']:
print(i)
else:
print('Done.') # Will execute
for i in ['foo', 'bar', 'baz', 'qux']:
if i == 'bar':
break
print(i)
else:
print('Done.') # Will not execute
| 06. Chapter_/Loops_while_and_for.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Feature table
# +
import os
import re
import time
from cltk.corpus.utils.formatter import assemble_phi5_works_filepaths
from cltk.corpus.utils.formatter import phi5_plaintext_cleanup
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
# -
def works_texts_list():
fps = assemble_phi5_works_filepaths()
curly_comp = re.compile(r'{.+?}')
_list = []
for fp in fps:
with open(fp) as fo:
fr = fo.read()
text = phi5_plaintext_cleanup(fr, rm_punctuation=True, rm_periods=True)
text = curly_comp.sub('', text)
_list.append(text)
return _list
t0 = time.time()
text_list = works_texts_list()
print('Total texts', len(text_list))
print('Time to build list of texts: {}'.format(time.time() - t0))
# bag of words/word count
def bow_csv():
t0 = time.time()
    vectorizer = CountVectorizer(min_df=1)
    term_document_matrix = vectorizer.fit_transform(text_list)
    # The vocabulary only exists after fitting, so build the column names here
    column_names = ['wc_' + w for w in vectorizer.get_feature_names()]
    dataframe_bow = pd.DataFrame(term_document_matrix.toarray(), columns=column_names)
print('DF BOW shape', dataframe_bow.shape)
fp = os.path.expanduser('~/cltk_data/user_data/bow_latin.csv')
dataframe_bow.to_csv(fp)
print('Time to create BOW vectorizer and write csv: {}'.format(time.time() - t0))
# tf-idf
def tfidf_csv():
t0 = time.time()
    vectorizer = TfidfVectorizer(min_df=1)
    term_document_matrix = vectorizer.fit_transform(text_list)
    # The vocabulary only exists after fitting, so build the column names here
    column_names = ['tfidf_' + w for w in vectorizer.get_feature_names()]
    dataframe_tfidf = pd.DataFrame(term_document_matrix.toarray(), columns=column_names)
print('DF tf-idf shape', dataframe_tfidf.shape)
fp = os.path.expanduser('~/cltk_data/user_data/tfidf_latin.csv')
dataframe_tfidf.to_csv(fp)
print('Time to create tf-idf vectorizer and write csv: {}'.format(time.time() - t0))
bow_csv()
tfidf_csv()
vectorizer = TfidfVectorizer(min_df=1)
term_document_matrix = vectorizer.fit_transform(text_list)
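# As a quick sanity check of what these vectorizers produce, a tiny made-up corpus is enough: one row per document, one column per vocabulary term.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Three short made-up documents
docs = ['gallia est omnis divisa', 'omnis gallia', 'arma virumque cano']
vec = TfidfVectorizer(min_df=1)
m = vec.fit_transform(docs)

print(m.shape)  # (3, 7): 3 documents, 7 distinct terms
```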
| tf-idf/tf-idf Latin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from BurstCube.simGenerator import configurator
conf = configurator('config.yaml')
from BurstCube.bcSim import simFiles
sfs = simFiles('config.yaml')
# print(sfs.sims[0].simFile)
# type(sfs.sims[0].simFile)# fileN = 'FIN_1000.000keV_15.00ze_0.00az.inc1.id1.sim'
area = sfs.calculateAeff()
# +
simulation = 0; detectOne = 0; detectTwo = 0; detectThree = 0; detectFour = 0
percentOne = 0; percentTwo = 0; percentThree = 0; percentFour = 0
# detectOneTot = 0
# detectTwoTot = 0
# detectThreeTot = 0
# detectFourTot = 0
detectTot = 0
triggerDict = {}
f = open("dict.txt","w")
doubTriggerDict = {}
triggerSim = []
triggerSimNum = []
triggerSimCount = []
triggerSimEnergy = []
Det1 = []
Det2 = []
Det3 = []
Det4 = []
spercentOne = 0
spercentTwo = 0
spercentThree = 0
spercentFour = 0
simType = ''
# count = 0
import numpy as np
# triggerData = np.zeros(len(self.sims)), dtype={'names': ['keV', 'ze', 'az', 'counterXX', 'counterNXX', 'counterNXY', 'counterXY'], 'formats': ['float32', 'float32', 'float32','float32', 'float32', 'float32', 'float32']}
for file in sfs.sims:
simulation = simulation + 1
# count += 1
# print(file.simFile)
import re
counterXX = 0
counterNXX = 0
counterNXY = 0
counterXY = 0
count = 0
d = 0
prevLine = 'random'
counter = 0
with open(file.simFile) as sim:
# print(file.simFile)
# triggerDict = {'trig': [0], 'd' : 'sample'}
for i, line in enumerate(sim):
if line.startswith('ID'):
ID = line.split()
for i in range(len(ID)):
trigger = ID[1]
if line.startswith('HTsim'):
data = line.split(";")
data = [w.strip(' ') for w in data]
# counter = counter + 1
data[1] = float(data[1])
data[2] = float(data[2])
# print(data[1])
# print("HELLO")
# print(data[2])
# print("GOODBYE")
# for i in range(len(data)):
# triggerDict.setdefault('triggers', [])
# triggerDict.setdefault('d', [])
if (abs(5.52500 - data[1]) <= 0.05) and (abs(5.52500 - data[2]) <= .05):
counterXX += 1
d = 1
# triggerDict['trig'].append(counterXX)
# triggerDict['d'].append(d)
elif (abs(-5.52500 - data[1]) <= 0.05) and (abs(5.52500 - data[2]) <= .05):
counterNXX += 1
d = 2
# triggerDict['trig'].append(counterNXX)
# triggerDict['d'].append(d)
elif (abs(-5.52500 - data[1]) <= 0.05) and (abs(-5.52500 - data[2]) <= .05):
counterNXY += 1
d = 3
# triggerDict['trig'].append(counterNXY)
# triggerDict['d'].append(d)
elif (abs(5.52500 - data[1]) <= 0.05) and (abs(-5.52500 - data[2]) <= .05):
counterXY += 1
d = 4
# triggerDict['trig'].append(counterXY)
# triggerDict['d'].append(d)
# detectOne = counterXX + detectOne
# detectTwo = counterNXX + detectTwo
# detectThree = counterNXY + detectThree
# detectFour = counterXY + detectFour
# print(detectOne)
# print(detectTwo)
# print(detectThree)
# print(detectFour)
# print('*****')
counter += 1
if prevLine.startswith('HTsim') and line.startswith('HTsim'):
sfo = re.split('[_ . //]', file.simFile)
# doubTriggerDict['energy'] = sfo[4]
# doubTriggerDict['zenith'] = sfo[6]
# doubTriggerDict['azimuth'] = sfo[8]
# doubTriggerDict['trigNum'] = trigger
prevLine = line
    # Pick the detector with the most hits. The original chained
    # `counterXX > (counterNXX and counterNXY and counterXY)` comparisons
    # tested against the truthiness of an `and` chain, not the maximum.
    counts = {'counterXX': counterXX, 'counterNXX': counterNXX,
              'counterNXY': counterNXY, 'counterXY': counterXY}
    triggerSimType = max(counts, key=counts.get)
    triggerSimMax = counts[triggerSimType]
count += 1
sfo = re.split('[/ _]', file.simFile)
triggerSimCount.append(triggerSimType)
triggerSimNum.append(triggerSimMax)
triggerSim.append(triggerSimMax)
triggerSim.append(triggerSimType)
# triggerSimEnergy.append(sfo[12])
spercentOne = counterXX / counter * 100
spercentTwo = counterNXX / counter * 100
spercentThree = counterNXY / counter * 100
spercentFour = counterXY / counter * 100
# print(counter, spercentOne)
Det1.append([spercentOne])
Det2.append([spercentTwo])
Det3.append([spercentThree])
Det4.append([spercentFour])
# print('***', trigger)
# vars = re.split('[_ .]', file.simFile)
# print("----------------------------------------------------------------------------")
# print('Simulation', simulation, '--', trigger, ': Energy ({} keV), Azimuth ({} degrees), Zenith ({} degrees)'.format(vars[3], vars[5], vars[7]))
# print("----------------------------------------------------------------------------")
# print('Detector 1 has', counterXX , 'hits.')
# print('Detector 2 has', counterNXX , 'hits.')
# print('Detector 3 has', counterNXY , 'hits.')
# print('Detector 4 has', counterXY , 'hits.')
# print('**', count)
detectOne = counterXX + detectOne
detectTwo = counterNXX + detectTwo
detectThree = counterNXY + detectThree
detectFour = counterXY + detectFour
detectTot = detectOne + detectTwo + detectThree + detectFour
# print(detectOne, detectTot)
percentOne = detectOne / detectTot * 100
percentTwo = detectTwo / detectTot * 100
percentThree = detectThree / detectTot * 100
percentFour = detectFour / detectTot * 100
'''Important, but not printed for saving purposes'''
# print('**', percentOne, "**", percentTwo, '**', percentThree, "**", percentFour, "**")
import operator
'''Important, but not printed for saving purposes'''
print(sorted(enumerate(triggerSimNum), key=operator.itemgetter(1)))
Energy = list(area['keV'])
'''Important, but not printed for saving purposes'''
# c = 0
# for i in range(len(triggerSimCount)):
# print((i+1), 'Index:', c, 'Value: ', triggerSimCount[i], 'Energy:', Energy[i])
# c = c + 1
Azimuth = list(area['az'])
Azimuth1 = []
for element in Azimuth:
Azimuth1.append([element])
Azimuth = Azimuth1
triggerDict['Azimuth'] = Azimuth
triggerDict['Detector1'] = Det1
triggerDict['Detector2'] = Det2
triggerDict['Detector3'] = Det3
triggerDict['Detector4'] = Det4
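# The four quadrant tests in the loop above can be factored into one helper; the detector centres (+/-5.525) and the 0.05 tolerance are taken directly from those checks:

```python
DETECTOR_CENTRES = {1: (5.525, 5.525), 2: (-5.525, 5.525),
                    3: (-5.525, -5.525), 4: (5.525, -5.525)}

def detector_for_hit(x, y, tol=0.05):
    """Return the detector number whose centre the hit falls within, else None."""
    for det, (cx, cy) in DETECTOR_CENTRES.items():
        if abs(cx - x) <= tol and abs(cy - y) <= tol:
            return det
    return None

print(detector_for_hit(5.51, 5.53))    # 1
print(detector_for_hit(-5.53, -5.50))  # 3
print(detector_for_hit(0.0, 0.0))      # None
```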
# +
from astropy.table import Table, Column
from astropy.io import fits
# hdulist = Table([Energy, Azimuth, Det1, Det2, Det3, Det4], names=('Energy', 'Azimuth', 'Det1', 'Det2', 'Det3', 'Det4'))
# hdulist.write('localizeTable.fits', format='fits')
# t1 = fits.open('/home/laura/Research/local_test/100_0_c/localizeTable.fits')
# t2 = fits.open('localizeTable.fits')
# Note: the commented-out version of this call used column format 'L' (logical/boolean),
# which would mangle these numeric columns; 'D' (double precision) is used instead.
hdu = fits.BinTableHDU.from_columns([
    fits.Column(name='Energy', format='D', array=np.asarray(Energy, dtype=float)),
    fits.Column(name='Azimuth', format='D', array=np.asarray(Azimuth, dtype=float).ravel()),
    fits.Column(name='Det1', format='D', array=np.asarray(Det1, dtype=float).ravel()),
    fits.Column(name='Det2', format='D', array=np.asarray(Det2, dtype=float).ravel()),
    fits.Column(name='Det3', format='D', array=np.asarray(Det3, dtype=float).ravel()),
    fits.Column(name='Det4', format='D', array=np.asarray(Det4, dtype=float).ravel())])
# hdu.writeto('localizeTrial.fits')
hdr = fits.Header()
hdr['HEADER'] = 'SIM 200 keV, 15 ze'
primary_hdu = fits.PrimaryHDU(header=hdr)
hdul = fits.HDUList([primary_hdu, hdu])
hdul.writeto('localizeTrial2.fits')
# print(t2)
# t = t1[1].columns + t2[1].columns
# hdu = pyfits.new_table(t)
# hdu.writeto('localizeTable.fits')
# -
f = fits.open('localizeTrial2.fits')
f[1].data['Energy']
| lepalmer/localCodes/local_test/200_15_c/.ipynb_checkpoints/200_15_c (7-12-18)-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Noisy and noiseless linear regression
# +
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import dotenv
import pandas as pd
import mlflow
import plotly
import plotly.graph_objects as go
import plotly.express as px
import plotly.subplots
import plotly.io as pio
import typing
import os
import shutil
import sys
# -
EXPORT = False
SHOW_TITLES = not EXPORT
EXPORT_DIR_NAME = 'teaser_plot_linreg'
LINREG_EXPERIMENT_NAME = 'teaser_plot_linreg'
# +
# Load environment variables
dotenv.load_dotenv()
# Enable loading of the project module
MODULE_DIR = os.path.join(os.path.abspath(os.path.join(os.path.curdir, os.pardir)), 'src')
sys.path.append(MODULE_DIR)
# -
# %load_ext autoreload
# %autoreload 2
import interpolation_robustness as ir
# +
if EXPORT:
EXPORT_DIR = os.path.join(ir.util.REPO_ROOT_DIR, 'logs', f'export_{EXPORT_DIR_NAME}')
print('Using export directory', EXPORT_DIR)
if os.path.exists(EXPORT_DIR):
shutil.rmtree(EXPORT_DIR)
os.makedirs(EXPORT_DIR)
def export_fig(fig: plt.Figure, filename: str):
# If export is disabled then do nothing
if EXPORT:
export_path = os.path.join(EXPORT_DIR, filename)
fig.savefig(export_path)
print('Exported figure at', export_path)
# -
FIGURE_SIZE = (2.75, 1.45)
LEGEND_FIGURE_SIZE = (3.2, 0.5)
LEGEND_FONT_SIZE = ir.plots.FONT_SIZE_SMALL_PT
AXIS_LABEL_FONT_SIZE = ir.plots.FONT_SIZE_SMALL_PT
ir.plots.setup_matplotlib(show_titles=SHOW_TITLES)
robust_color_idx = 1
std_color_idx = 0
robust_linestyle_idx = 0
std_linestyle_idx = 2
# +
client = mlflow.tracking.MlflowClient()
linreg_experiment = client.get_experiment_by_name(LINREG_EXPERIMENT_NAME)
linreg_runs = mlflow.search_runs(
linreg_experiment.experiment_id
)
linreg_runs = linreg_runs.set_index('run_id', drop=False) # set index, but keep column to not break stuff depending on it
# Convert some parameters to numbers and sort accordingly
linreg_runs['params.data_dim'] = linreg_runs['params.data_dim'].astype(int)
linreg_runs['params.test_attack_epsilon'] = linreg_runs['params.test_attack_epsilon'].astype(float)
linreg_runs['params.l2_lambda'] = linreg_runs['params.l2_lambda'].astype(float)
linreg_runs['params.data_gaussian_noise_variance'] = linreg_runs['params.data_gaussian_noise_variance'].astype(float)
linreg_runs = linreg_runs.sort_values(['params.data_dim', 'params.l2_lambda', 'params.data_gaussian_noise_variance'], ascending=True)
print('Loaded', len(linreg_runs), 'runs of experiment', LINREG_EXPERIMENT_NAME)
assert linreg_runs['status'].eq('FINISHED').all()
# +
num_samples, = linreg_runs['params.num_train_samples'].astype(int).unique()
nonoise, covariate_noise = np.sort(linreg_runs['params.data_gaussian_noise_variance'].unique())
assert nonoise == 0 and covariate_noise > 0
linreg_noiseless_runs = linreg_runs[linreg_runs['params.data_gaussian_noise_variance'] == 0]
linreg_noise_runs = linreg_runs[linreg_runs['params.data_gaussian_noise_variance'] > 0]
linreg_noreg_noiseless_runs = linreg_noiseless_runs[linreg_noiseless_runs['params.l2_lambda'] == 0]
linreg_bestreg_noiseless_runs = linreg_noiseless_runs.groupby('params.data_dim').aggregate({'metrics.true_std_risk': 'min', 'metrics.true_robust_risk': 'min'})
linreg_noreg_noise_runs = linreg_noise_runs[linreg_noise_runs['params.l2_lambda'] == 0]
linreg_bestreg_noise_runs = linreg_noise_runs.groupby('params.data_dim').aggregate({'metrics.true_std_risk': 'min', 'metrics.true_robust_risk': 'min'})
linreg_noiseless_std_gaps = linreg_noreg_noiseless_runs['metrics.true_std_risk'].values - linreg_bestreg_noiseless_runs['metrics.true_std_risk'].values
linreg_noiseless_robust_gaps = linreg_noreg_noiseless_runs['metrics.true_robust_risk'].values - linreg_bestreg_noiseless_runs['metrics.true_robust_risk'].values
linreg_noise_std_gaps = linreg_noreg_noise_runs['metrics.true_std_risk'].values - linreg_bestreg_noise_runs['metrics.true_std_risk'].values
linreg_noise_robust_gaps = linreg_noreg_noise_runs['metrics.true_robust_risk'].values - linreg_bestreg_noise_runs['metrics.true_robust_risk'].values
# -
xticks = (0, 2, 4, 6, 8, 10)
y_axis_label = r'Risk($\lambda \to 0$) - Risk($\lambda_{\textnormal{opt}}$)'
# +
fig, ax = plt.subplots(figsize=FIGURE_SIZE)
x_values = linreg_noreg_noiseless_runs[f'params.data_dim'].unique() / float(num_samples)
ax.plot(
x_values,
linreg_noiseless_std_gaps,
label=r'Standard risk',
ls=ir.plots.LINESTYLE_MAP[std_linestyle_idx],
marker=ir.plots.MARKER_MAP[std_color_idx],
c=f'C{std_color_idx}'
)
ax.plot(
x_values,
linreg_noiseless_robust_gaps,
label=r'Robust risk',
ls=ir.plots.LINESTYLE_MAP[robust_linestyle_idx],
marker=ir.plots.MARKER_MAP[robust_color_idx],
c=f'C{robust_color_idx}'
)
ax.set_xlabel('d/n', fontsize=AXIS_LABEL_FONT_SIZE)
ax.set_ylabel(y_axis_label, fontsize=AXIS_LABEL_FONT_SIZE, y=0.4)
ax.set_xticks(xticks)
ax.set_ylim(bottom=-0.0025, top=0.15)
ax.set_xlim(left=0)
if SHOW_TITLES:
fig.suptitle('Linear regression, noiseless')
export_fig(fig, f'teaser_plot_linreg_noiseless.pdf')
plt.show()
# +
fig, ax = plt.subplots(figsize=FIGURE_SIZE)
x_values = linreg_noreg_noise_runs[f'params.data_dim'].unique() / float(num_samples)
ax.plot(
x_values,
linreg_noise_std_gaps,
label=r'Standard risk',
ls=ir.plots.LINESTYLE_MAP[std_linestyle_idx],
marker=ir.plots.MARKER_MAP[std_color_idx],
c=f'C{std_color_idx}'
)
ax.plot(
x_values,
linreg_noise_robust_gaps,
label=r'Robust risk',
ls=ir.plots.LINESTYLE_MAP[robust_linestyle_idx],
marker=ir.plots.MARKER_MAP[robust_color_idx],
c=f'C{robust_color_idx}'
)
ax.set_xlabel('d/n', fontsize=AXIS_LABEL_FONT_SIZE)
ax.set_ylabel(y_axis_label, fontsize=AXIS_LABEL_FONT_SIZE, y=0.4)
ax.set_xticks(xticks)
ax.set_ylim(bottom=-0.005, top=0.5)
ax.set_xlim(left=0)
if SHOW_TITLES:
fig.suptitle('Linear regression, noisy')
export_fig(fig, f'teaser_plot_linreg_noisy.pdf')
plt.show()
# +
# Legend
legend_fig = plt.figure(figsize=LEGEND_FIGURE_SIZE)
handles, labels = ax.get_legend_handles_labels()
legend_fig.legend(
handles,
labels,
loc='center',
ncol=2,
frameon=True,
fontsize=LEGEND_FONT_SIZE,
borderpad=0.5
)
export_fig(legend_fig, f'teaser_plot_linreg_legend.pdf')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Quadrat Based Statistical Method for Planar Point Patterns
#
# **Authors: <NAME> <<EMAIL>>, <NAME> <<EMAIL>> and <NAME> <<EMAIL>>**
#
# ## Introduction
#
# In this notebook, we are going to introduce how to apply quadrat statistics to a point pattern to infer whether it comes from a CSR process.
#
# 1. In [Quadrat Statistic](#Quadrat-Statistic) we introduce the concept of the quadrat-based method.
# 2. We illustrate how to use the module **quadrat_statistics.py** through the example dataset **juvenile** in [Juvenile Example](#Juvenile-Example).
# ## Quadrat Statistic
#
# In the previous notebooks, we introduced the concept of the Complete Spatial Randomness (CSR) process, which serves as the benchmark process. Using CSR properties, we can identify point patterns that do not come from a CSR process; the quadrat statistic is one such method. A CSR process has two major characteristics:
# 1. Uniform: each location has equal probability of getting a point (where an event happens).
# 2. Independent: location of event points are independent.
#
# For any point pattern, if the underlying process is CSR, the expected point count inside any cell of area $|A|$ is $\lambda |A|$ (where $\lambda$ is the intensity, which is uniform across the study area for a CSR process). Thus, if we impose an $m \times k$ rectangular tessellation over the study area (window), we can easily calculate the expected number of points inside each cell under the null of CSR. By comparing the observed point counts against the expected counts and calculating a $\chi^2$ test statistic, we can decide whether to reject the null based on the position of the $\chi^2$ test statistic in the sampling distribution.
#
# $$\chi^2 = \sum^m_{i=1} \sum^k_{j=1} \frac{[x_{i,j}-E(x_{i,j})]^2}{\lambda |A_{i,j}|}$$
#
# There are two ways to construct the sampling distribution and acquire a p-value:
# 1. Analytical sampling distribution: a $\chi^2$ distribution with $m \times k - 1$ degrees of freedom. We can refer to the $\chi^2$ distribution table to acquire the p-value. If it is smaller than $0.05$, we reject the null at the $95\%$ confidence level.
# 2. Empirical sampling distribution: a distribution constructed from a large number of $\chi^2$ test statistics computed from simulations under the null of CSR. If the $\chi^2$ test statistic for the observed point pattern is among the largest $5\%$ of test statistics, it is very unlikely to be the outcome of a CSR process at the $95\%$ confidence level, and the null is rejected. A pseudo p-value can be calculated, to which we apply the same decision rule as for the p-value:
# $$p(\chi^2) = \frac{1+\sum^{nsim}_{i=1}\phi_i}{nsim+1}$$
# where
# $$
# \phi_i =
# \begin{cases}
# 1 & \quad \text{if } \psi_i^2 \geq \chi^2 \\
# 0 & \quad \text{otherwise } \\
# \end{cases}
# $$
#
# $nsim$ is the number of simulations, $\psi_i^2$ is the $\chi^2$ test statistic calculated for each simulated point pattern, $\chi^2$ is the $\chi^2$ test statistic calculated for the observed point pattern, $\phi_i$ is an indicator variable.
#
# We are going to introduce how to use the **quadrat_statistics.py** module to perform the quadrat-based method, using either of the above two approaches to construct the sampling distribution and acquire a p-value.
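#
# As a quick illustration before turning to `pointpats`, both approaches can be sketched by hand on synthetic data on the unit square (the helper `quadrat_chi2` below is a hypothetical hand-rolled version, not the `pointpats` API; the real analysis follows):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 3, 3

def quadrat_chi2(points, m, k):
    """Chi-squared quadrat statistic over an m x k grid on the unit square."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=(m, k), range=[[0, 1], [0, 1]])
    expected = len(points) / (m * k)  # lambda * |A_ij| under CSR
    return ((counts - expected) ** 2 / expected).sum()

# One "observed" pattern (here CSR by construction).
points = rng.uniform(0, 1, size=(n, 2))
chi2_obs = quadrat_chi2(points, m, k)

# Approach 2: empirical sampling distribution from nsim CSR simulations.
nsim = 99
psi2 = [quadrat_chi2(rng.uniform(0, 1, size=(n, 2)), m, k) for _ in range(nsim)]
pseudo_p = (1 + sum(p >= chi2_obs for p in psi2)) / (nsim + 1)
```

# For approach 1, `chi2_obs` would instead be compared against the analytical $\chi^2$ distribution with $m \times k - 1 = 8$ degrees of freedom (e.g. via `scipy.stats.chi2.sf`).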
#
# ## Juvenile Example
import libpysal as ps
import numpy as np
from pointpats import PointPattern, as_window
from pointpats import PoissonPointProcess as csr
# %matplotlib inline
import matplotlib.pyplot as plt
# Import the quadrat_statistics module to conduct quadrat-based method.
#
# Among the three major classes in the module (**RectangleM, HexagonM, QStatistic**), the first two impose a tessellation (rectangular or hexagonal) over the minimum bounding rectangle of the point pattern and count the number of points falling in each cell; **QStatistic** is the main class, with which we can calculate a p-value, as well as a pseudo p-value, to help us decide whether to reject the null.
import pointpats.quadrat_statistics as qs
dir(qs)
# Open the point shapefile "juvenile.shp".
juv = ps.io.open(ps.examples.get_path("juvenile.shp"))
len(juv) # 168 point events in total
juv_points = np.array([event for event in juv]) # get x,y coordinates for all the points
juv_points
# Construct a point pattern from numpy array **juv_points**.
pp_juv = PointPattern(juv_points)
pp_juv
pp_juv.summary()
pp_juv.plot(window= True, title= "Point pattern")
# ### Rectangle quadrats & analytical sampling distribution
#
# We can impose a rectangular tessellation over the minimum bounding box (mbb) of the point pattern by specifying **shape** as "rectangle", and specify the number of rectangles in each row and column. For the current analysis, we use a $3 \times 3$ rectangular grid.
q_r = qs.QStatistic(pp_juv,shape= "rectangle",nx = 3, ny = 3)
# Use the plot method to plot the quadrats as well as the number of points falling in each quadrat.
q_r.plot()
q_r.chi2 #chi-squared test statistic for the observed point pattern
q_r.df #degree of freedom
q_r.chi2_pvalue # analytical pvalue
# Since the p-value based on the analytical $\chi^2$ distribution (8 degrees of freedom) is 0.0000589, much smaller than 0.05, we conclude that the underlying process is unlikely to be CSR. We can also turn to the empirical sampling distribution to confirm this decision.
# ### Rectangle quadrats & empirical sampling distribution
#
# To construct an empirical sampling distribution, we need to simulate CSR within the window of the observed point pattern many times. Here, we generate 999 point patterns under the null of CSR.
csr_process = csr(pp_juv.window, pp_juv.n, 999, asPP=True)
# We specify parameter **realizations** as the point process instance which contains 999 CSR realizations.
q_r_e = qs.QStatistic(pp_juv,shape= "rectangle",nx = 3, ny = 3, realizations = csr_process)
q_r_e.chi2_r_pvalue
# The pseudo p-value is 0.002, which is smaller than 0.05. Thus, we reject the null at the $95\%$ confidence level.
# ### Hexagon quadrats & analytical sampling distribution
#
# We can also impose a hexagonal tessellation over the mbb of the point pattern by specifying **shape** as "hexagon", and specify the length of the hexagon edge. For the current analysis, we set it to 15.
q_h = qs.QStatistic(pp_juv,shape= "hexagon",lh = 15)
q_h.plot()
q_h.chi2 #chi-squared test statistic for the observed point pattern
q_h.df #degree of freedom
q_h.chi2_pvalue # analytical pvalue
# As with the inference for the rectangular tessellation, since the analytical p-value is much smaller than 0.05, we reject the null of CSR. The point pattern is not random.
# ### Hexagon quadrats & empirical sampling distribution
q_h_e = qs.QStatistic(pp_juv,shape= "hexagon",lh = 15, realizations = csr_process)
q_h_e.chi2_r_pvalue
# Because 0.001 is smaller than 0.05, we reject the null.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Evaluate the Annotations of Mismatching Translation Pairs
# This is an evaluation script for the annotated mismatching translation pairs in the annotation experiment.
#
# Recall the numbers:
# - coupling: $\Gamma^{20000}_{\langle 1740, 1770 \rangle}$ (14031 pairs, 5835 mismatches)
# - 2500 pairs in total (2x1500 pairs annotated, overlap: 500)
# - classes:
# - O match (orthographic/morphological difference)
# - S match (semantically similar)
# - R mismatch (semantically related)
# - A mismatch (antonymy)
# - N mismatch (noise)
# - X mismatch (wildcard)
#
# To do:
# - compute inter-annotator agreement (Cohen's Kappa) _(0.44, 'moderate')_
# - compute relative frequency of matches, mismatches _(matches/overlap: 0.722 (before) vs 0.728 (after annotation))_
# - compute estimated quality of $\Gamma$ -- how much does it change the match/mismatch statistics? _(not much)_
# - make histogram of classes by score _(done)_
# - make boxplot of classes _(done)_
# - retrieve all pairs with labels O, S, R, and A
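#
# As a reminder of what `cohen_kappa_score` computes below: Cohen's kappa compares the observed agreement $p_o$ against the chance agreement $p_e$ implied by each annotator's label marginals. A minimal pure-Python sketch (the helper name is hypothetical; the script itself uses scikit-learn):

```python
from collections import Counter

def cohens_kappa(labels1, labels2):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(labels1)
    # Observed agreement: fraction of positions where the annotators agree.
    p_o = sum(a == b for a, b in zip(labels1, labels2)) / n
    # Chance agreement from the two annotators' label marginals.
    c1, c2 = Counter(labels1), Counter(labels2)
    p_e = sum(c1[lab] * c2[lab] for lab in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)

cohens_kappa(["O", "S", "N", "N"], ["O", "R", "N", "N"])  # 3/4 raw agreement, kappa ~ 0.636
```

# Note that kappa corrects raw agreement downward: 3 of 4 labels match, but since both annotators use "N" often, part of that agreement is expected by chance.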
# +
""" imports """
import utils
import default
import csv
import numpy as np
from pandas import DataFrame
import pandas as pd
from pprint import pprint
from sklearn.metrics import cohen_kappa_score
import matplotlib.pyplot as plt
# %matplotlib inline
LABELS = ["O", "S", "R", "A", "N", "X"]
# +
def read_annot_file(filepath):
labels = []
words1 = []
words2 = []
scores = []
pairs = []
with open(filepath, "r") as f:
for line in f:
line = line.rstrip()
if len(line) > 0:
values = line.split()
labels.append(values[0][1].upper()) # only the label, not the brackets
words1.append(values[1])
words2.append(values[2])
pairs.append((values[1], values[2]))
scores.append(float(values[3]))
df = DataFrame({"label":labels, "word1":words1, "word2":words2, "pair":pairs, "score":scores})
return df
def iaa(df1, df2, selection, label_name, selection_name):
"""
this requires two dataframes, another dataframe to restrict the selection of labels (or None),
and the column names of the labels and of the column to select by.
"""
if selection is None:
labels1 = df1[label_name].tolist()
labels2 = df2[label_name].tolist()
else:
labels1 = df1[df1[selection_name].isin(selection)][label_name].tolist()
labels2 = df2[df2[selection_name].isin(selection)][label_name].tolist()
return cohen_kappa_score(labels1, labels2)
# -
# #### Read the annotated files and compute the inter-annotator agreement. Then count the label frequencies
# +
overlap_file = "annotation/pairs/overlap.txt"
annot1_file = "annotation/annotated/annotations_1.txt"
annot2_file = "annotation/annotated/annotations_2.txt"
# read in all the data
overlap_pairs = set(read_annot_file(overlap_file)["pair"].tolist())
annot1 = read_annot_file(annot1_file)
annot2 = read_annot_file(annot2_file)
# compute inter-annotator agreement
agreement = iaa(annot1, annot2, overlap_pairs, "label", "pair")
print("agreement:",agreement)
# 0.45 is 'moderate agreement'
# +
annot = pd.concat([annot1, annot2.loc[500:]], ignore_index=True)  # reset the index so positional and label-based indexing agree
annot_ranked = annot.sort_values("score", ascending=False, ignore_index=True)
annot_ranked
# +
labels_O = annot[annot["label"]=="O"]
labels_S = annot[annot["label"]=="S"]
labels_R = annot[annot["label"]=="R"]
labels_A = annot[annot["label"]=="A"]
labels_N = annot[annot["label"]=="N"]
labels_X = annot[annot["label"]=="X"]
others = annot[annot["label"].isin(LABELS) == False]
for frame, label in zip([labels_O, labels_S, labels_R, labels_A, labels_N, labels_X], LABELS):
print(f"label: {label} frequency: {len(frame):>4} {100*len(frame)/len(annot):>5.2f}%")
print(f"\nmislabelled:\n{others}")
# -
# #### Pairs are distributed unevenly across coupling scores. Instead of plotting by score, plot by score rank instead.
plt.hist([labels_O["score"].tolist(),
labels_S["score"].tolist(),
labels_R["score"].tolist(),
labels_A["score"].tolist(),
labels_N["score"].tolist(),
labels_X["score"].tolist()],
label = LABELS,
histtype = 'bar'
)
plt.legend()
plt.grid()
plt.xlabel("coupling score x1e-3")
plt.ylabel("occurrences")
# +
rlabels_O = annot_ranked[annot_ranked["label"]=="O"]
rlabels_S = annot_ranked[annot_ranked["label"]=="S"]
rlabels_R = annot_ranked[annot_ranked["label"]=="R"]
rlabels_A = annot_ranked[annot_ranked["label"]=="A"]
rlabels_N = annot_ranked[annot_ranked["label"]=="N"]
rlabels_X = annot_ranked[annot_ranked["label"]=="X"]
rothers = annot_ranked[annot_ranked["label"].isin(LABELS) == False]
plt.hist([rlabels_O.index.tolist(),
rlabels_S.index.tolist(),
rlabels_R.index.tolist(),
rlabels_A.index.tolist(),
rlabels_N.index.tolist(),
rlabels_X.index.tolist()],
label = LABELS,
histtype = 'bar'
)
plt.legend()
plt.xlabel("rank by coupling score")
plt.ylabel("occurrences")
# +
""" Exclude 'R' and 'N' to let the other labels show more nicely """
size_factor = 5
rows = 1
columns = 1
fig, axes = plt.subplots(rows, columns, figsize=(1.5*size_factor*columns, size_factor*rows))
axes.hist([rlabels_O.index.tolist(),
rlabels_S.index.tolist(),
rlabels_A.index.tolist(),
rlabels_X.index.tolist()],
label = ["O", "S", "A", "X"],
rwidth = 0.8,
histtype = 'bar')
axes.legend()
axes.grid(which='both')
plt.xticks(range(0,2750,250), [str(x) for x in range(0,2750,250)])
axes.set_xlabel("rank by coupling score")
axes.set_ylabel("occurrences")
#plt.savefig("visuals/annotation_labels-hist.png", dpi=250)
# +
size_factor = 5
rows = 1
columns = 1
fig, axes = plt.subplots(rows, columns, figsize=(1.5*size_factor*columns, size_factor*rows))
axes.boxplot([rlabels_O.index.tolist(),
rlabels_S.index.tolist(),
rlabels_R.index.tolist(),
rlabels_A.index.tolist(),
rlabels_N.index.tolist(),
rlabels_X.index.tolist()],
labels = LABELS)
axes.grid(which='both')
axes.set_xlabel("label")
axes.set_ylabel("rank by coupling score")
#plt.savefig("visuals/annotation_label-ranks-boxplot.png", dpi=250)
# -
# #### Improvement of coupling performance
#
# - label frequencies: O:27, S:44, R:334, A:43, N:2032, X:17, others:3
# - 71 valuable (O, S)
# - 2426 noise (R, A, N, X)
#
# from the coupling:
# - 14031 pairs
# - 11357 vocabulary overlap
# - 8196 matches (5835 mismatches)
# - matches/overlap: 0.722
# - matches/pairs: 0.584
#
# corrected:
# - 8267 matches (5764 mismatches)
# - matches/overlap: 0.728
# - matches/pairs: 0.589
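#
# The corrected counts follow from re-classifying the 71 valuable pairs (labels O and S) from mismatch to match; a quick arithmetic check:

```python
matches, mismatches = 8196, 5835   # from the coupling, before correction
overlap, pairs = 11357, 14031
valuable = 27 + 44                 # O + S pairs promoted to matches

corrected_matches = matches + valuable        # 8267
corrected_mismatches = mismatches - valuable  # 5764

print(round(corrected_matches / overlap, 3))  # 0.728
print(round(corrected_matches / pairs, 3))    # 0.589
```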
# #### Write annotated pairs to a file (only those with the labels 'O' and 'S')
woi = {}
for frame in [labels_O, labels_S]:
for p, s in zip(frame["pair"].tolist(), frame["score"].tolist()):
woi[p] = s
filepath = "words_of_interest/1740_annotated_pairs.txt"
with open(filepath, "w") as f:
for pair in sorted(woi, key=woi.get, reverse=True):
#f.write(f"{pair[0]:<15} {pair[1]:<15} {woi[pair]:>5.3f}\n")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MAT281 - Homework 2
# **Instructions**:
#
# * In **Exercises 1-8** you may use either `matplotlib` or `altair`, whichever you find more convenient or comfortable; in both cases each plot must include at least:
#     - A title.
#     - Axis names, legends, etc. in a _friendly_/_human_ format, e.g. if the dataframe column in question is named `casos_confirmados`, the plot axis is expected to be labeled `Casos confirmados`.
#     - Colors appropriate to the type of data.
#     - A size that is easy to read on an HD or FullHD screen.
#     - Each time one of these requirements is not met, __1 point__ will be deducted from the final grade.
#
# * For **Exercise 9** the use of `altair` is mandatory.
# * Each exercise must be accompanied by a cell with comments or analysis you can draw from the plots.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import altair as alt
import ipywidgets as widgets
from datetime import date
from ipywidgets import interactive, interact
pd.set_option('display.max_columns', 999)
alt.data_transformers.disable_max_rows()
alt.themes.enable('opaque')
# %matplotlib inline
# -
# **COVID-19 in Chile**
#
# In this homework we will explore Chile's COVID-19 data in depth. The following cells load the data to be used into your session. It is important that you read the documentation of each dataset to understand its columns.
start_date = pd.to_datetime("2020-04-13")
# +
# https://github.com/MinCiencia/Datos-COVID19/tree/master/output/producto6
confirmados = (
pd.read_csv("https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/producto6/bulk/data.csv")
.rename(columns=lambda x: x.lower().replace(" ", "_"))
.assign(fecha=lambda x: pd.to_datetime(x["fecha"]))
.loc[lambda x: x["fecha"] >= start_date]
.dropna()
.astype({"casos_confirmados": np.float, "tasa": np.float})
)
confirmados.head()
# +
# https://github.com/MinCiencia/Datos-COVID19/tree/master/output/producto19
activos = (
pd.read_csv("https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/producto19/CasosActivosPorComuna.csv")
.rename(columns=lambda x: x.lower().replace(" ", "_"))
.loc[lambda x: x["codigo_comuna"].notnull()]
.melt(id_vars=["region", "codigo_region", "comuna", "codigo_comuna", "poblacion"], var_name="fecha", value_name="casos_activos")
.assign(fecha=lambda x: pd.to_datetime(x["fecha"]))
.loc[lambda x: x["fecha"] >= start_date]
)
activos.head()
# +
# https://github.com/MinCiencia/Datos-COVID19/tree/master/output/producto14
fallecidos = (
pd.read_csv("https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/producto14/FallecidosCumulativo.csv")
.rename(columns=lambda x: x.lower().replace(" ", "_"))
.melt(id_vars=["region"], var_name="fecha", value_name="fallecidos")
.assign(
fecha=lambda x: pd.to_datetime(x["fecha"]),
)
.loc[lambda x: x["fecha"] >= start_date]
)
fallecidos.head()
# +
# https://github.com/MinCiencia/Datos-COVID19/tree/master/output/producto10
fallecidos_etareo = (
pd.read_csv("https://raw.githubusercontent.com/MinCiencia/Datos-COVID19/master/output/producto10/FallecidosEtario.csv")
.rename(columns=lambda x: x.lower().replace(" ", "_"))
.melt(id_vars=["grupo_de_edad"], var_name="fecha", value_name="fallecidos")
.assign(
fecha=lambda x: pd.to_datetime(x["fecha"]),
grupo_de_edad=lambda x: x["grupo_de_edad"].str.replace("<=39", "0-39")
)
.loc[lambda x: x["fecha"] >= start_date]
)
fallecidos_etareo.head()
# -
# ## Exercise 1
#
# (10 points)
#
# Show the number of deaths to date for each age group.
alt.Chart(fallecidos_etareo.iloc[-7:,:]).mark_bar().encode(
x = alt.X('grupo_de_edad:O', title = 'Grupo etáreo'),
y = alt.Y('fallecidos:Q', title = 'Fallecidos'),
).properties(
width=600,
height=400,
title = 'Fallecidos a la fecha por cada grupo etáreo'
)
# **Comments:** To date, the age group with the most deaths is 70-79, while the age group with the fewest deaths is 0-39.
# ## Exercise 2
#
# (10 points)
#
# How variable is the population of Chile's comunas? Consider a plot that summarizes the information well without adding the region or province variable.
alt.Chart(activos).mark_bar().encode(
x = alt.X('comuna', title = 'Comuna', sort= '-y'),
y = alt.Y('poblacion', title = 'Población'),
).properties(title = 'Población por comuna'
)
# **Comments:** At a glance, most comunas have fewer than 100,000 inhabitants, while a considerable number fall between 200,000 and 300,000. Few comunas exceed 300,000 inhabitants, but those that do reach over 400,000, 500,000, and even 600,000. In conclusion, there is great variability in the population of the comunas.
# ## Exercise 3
#
# (10 points)
#
# Show the evolution of and comparison between deaths across age groups, while also making it easy to identify the total number of deaths at each date.
alt.Chart(fallecidos_etareo).mark_line().encode(
x=alt.X('fecha',title = 'Fecha'),
y=alt.Y('fallecidos',title = 'Numero de fallecidos'),
color='grupo_de_edad'
).properties(title = 'Numero de fallecidos acumulados por fecha y por grupo etario'
)
# **Comments:** From the start, the age group with the most cumulative deaths has been 70-79, while the group with the fewest cumulative deaths has always been 0-39. Note that the death rate accelerated in July and then slowed again in August.
# ## Exercise 4
#
# (10 points)
#
# Show in three plots the evolution of confirmed cases, the evolution of deaths, and the evolution of active cases.
# +
confirm=alt.Chart(confirmados).mark_line().encode(
x=alt.X('fecha',title="Fecha"),
y=alt.Y('sum(casos_confirmados)',title="Total de Casos confirmados")
).properties(title = 'Total de casos confirmados por fecha'
)
fallec= alt.Chart(fallecidos_etareo).mark_line().encode(
x=alt.X('fecha',title="Fecha"),
y=alt.Y('sum(fallecidos)',title="Total de Fallecidos")
).properties(title = 'Total de Fallecidos por fecha'
)
activ= alt.Chart(activos).mark_line().encode(
x=alt.X('fecha',title="Fecha"),
y=alt.Y('sum(casos_activos)',title="Total de Casos Activos")
).properties(title = 'Total de casos activos por fecha'
)
confirm & fallec & activ
# -
# **Comments:** Between June and July there was an increase in daily active cases, which is reflected in a steeper slope in both cumulative confirmed cases and cumulative deaths.
# ## Exercise 5
#
# (10 points)
#
# Compare the incidence rate across regions over time.
tasa_reg = (confirmados[["region","fecha","poblacion","casos_confirmados"]]
.groupby(["region","fecha"]).agg('sum').reset_index()
)
tasa_reg=tasa_reg.assign(tasa=100000*tasa_reg["casos_confirmados"]/tasa_reg["poblacion"])
tasa_reg
alt.Chart(tasa_reg).mark_line().encode(
x=alt.X('fecha',title="Fecha"),
y=alt.Y('tasa',title="Tasa de incidencia"),
color=alt.Color('region',title="Región"),
tooltip = [alt.Tooltip('region', title = 'Region')]
).properties(
width=600,
height=400,
title = 'Tasa de incidencia por región'
)
# **Comments:** The incidence rate was initially highest in the Magallanes region. From mid-May the Metropolitan region had the highest incidence rate until mid-September, after which Magallanes regained the highest rate (through the present). Meanwhile, Arica y Parinacota has been slowly raising its incidence rate since June and is currently the region with the second-highest rate.
# ## Exercise 6
#
# (10 points)
#
# Is there any conclusion you can quickly draw from a _scatter plot_ of confirmed cases vs. incidence rate for each comuna on April 13 and November 6, 2020? Also, color each point by its region, and consider whether it is useful for the point size to be proportional to population.
# +
fecha_1=pd.to_datetime("2020-04-13")
fecha_2 = pd.to_datetime("2020-11-06")
alt.Chart(confirmados.loc[lambda x: (x["fecha"]==fecha_1) | (x["fecha"]==fecha_2)]).mark_circle().encode(
x = alt.X('casos_confirmados', title = 'Casos confirmados'),
y = alt.Y('tasa', title = 'Tasa'),
color = alt.Color('region', title = 'Región'),
size=alt.Size("poblacion"),
tooltip = [alt.Tooltip('fecha', title = 'Fecha')]
).properties(
width=600,
height=400,
title = 'Tasa vs Casos confirmados '
)
# -
# **Comments:** On April 13 the data from all regions seem to behave similarly (few confirmed cases, low incidence rate). By November 6 the regional data are dispersed and the behavior is highly variable (few confirmed cases with a high rate, few with a low rate, many confirmed cases with a medium or high rate). The size encoding is useful, since a high rate in a comuna of 100,000 inhabitants is not the same as a high rate in a comuna of 600,000 inhabitants.
# ## Exercise 7
#
# (10 points)
#
# 1. Plot the evolution of active cases for every comuna in a single chart.
# 2. Plot the evolution of active cases for every comuna in separate charts per region.
#
# Give the pros and cons of each of these approaches.
alt.Chart(activos).mark_line().encode(
x=alt.X('fecha',title="Fecha"),
y=alt.Y('casos_activos',title="Casos Activos"),
color=alt.Color('comuna',title="Comuna")
).properties(
width=600,
height=400,
title = 'Evolución de casos activos por comuna'
)
# +
graficoscomuna=[]
for x in activos["region"].unique():
graficoscomuna.append(alt.Chart(activos.loc[lambda df: df["region"]==x]).mark_line().encode(
x=alt.X('fecha',title="Fecha"),
y=alt.Y('casos_activos',title="Casos Activos"),
color=alt.Color('comuna',title="Comuna")
).properties(
width=600,
height=400,
title = f'Evolucion de casos activos en {x}'
))
alt.vconcat(*graficoscomuna).resolve_scale(color='independent')
# -
# **Comments:**
#
# First chart:
#
# Cons:
#
# - Not all comunas fit in the chart legend.
# - With so many comunas the chart cannot be read properly.
#
# Pros:
#
# - Overall, regions with similar behavior are visible and atypical behavior stands out.
#
# Second chart:
#
# Pros:
#
# - The names of all comunas in each region are visible.
# - With fewer lines, the behavior is clearly visible in most regions.
#
# Cons:
#
# - For regions with many comunas we are back to the same problem: the lines cannot be read clearly.
# ## Exercise 8
#
# (10 points)
#
# Make a chart that allows quick comparison across regions of their average active cases, maximum confirmed cases, and deaths. Use the actual values, supported by colors.
#
# The dictionary `region_names` is provided to rename the regions in the `fallecidos` data so they can be joined with the other datasets.
region_names = {
"Araucanía": "La Araucanía",
"Aysén": "Aysén del General Carlos Ibáñez del Campo",
"Magallanes": "Magallanes y de la Antártica Chilena",
"Metropolitana": "Metropolitana de Santiago",
"O’Higgins": "Libertador General B<NAME>",
}
fallecidos_reg=fallecidos.groupby("region").agg(num_fallecidos=('fallecidos','max')).drop("Total",axis=0)
fallecidos_reg=fallecidos_reg.rename(region_names)
activos_reg=activos.groupby("codigo_region").agg(prom_activos = ('casos_activos','mean'))
confirmados_reg=confirmados.groupby("region").agg(max_confirmados=('casos_confirmados',"max"),codigo_region=("region_id","mean"))
aux=pd.merge(fallecidos_reg,confirmados_reg.reset_index(), how="right",on="region")
resumen_reg=pd.merge(activos_reg,aux, how="right",on="codigo_region").drop("codigo_region",axis=1)
resumen_reg
display(resumen_reg.plot(x="region",kind='bar',
title="Promedio de casos activos, Maximo de casos confirmados y Total de fallecidos por región",
figsize=(16,10),xlabel="Regiones",ylabel="Cantidad"))
# **Comments:** The Metropolitan region has clearly been the most affected: not only does it have the highest number of confirmed cases, it also has the highest number of deaths. For every region the number of active cases is very small (the chart scale does not even let us see it, so I would recommend plotting it in a separate chart).
# ## Exercise 9
#
#
# In this exercise we will build a mini-dashboard on the state of COVID-19 cases in Chile. We will use geographic data in a purely operational way (that is, we will not worry about map projections or anything of the sort); the only requirement is that you install `geopandas` in your virtual environment and remember to update your `environment.yml` before pushing it to your GitHub repository.
#
# With your environment active (`conda activate mat281`), running `conda install -c conda-forge geopandas` is enough to install `geopandas`.
import geopandas as gpd
from pathlib import Path
shp_filepath = Path().resolve().parent / "data" / "regiones_chile.shp"
regiones = gpd.read_file(shp_filepath)
regiones.head()
type(regiones)
# The only thing you need to know is that a `GeoDataFrame` is identical to a `DataFrame` except that it must have a column named `geometry` that characterizes the geometric elements, which in this case are polygons with the boundaries of the regions of Chile.
#
# To draw maps in Altair you use `mark_geoshape`; in addition, to avoid dealing with projections, you must always declare in the chart properties what is shown in the next cell. Everything else works like any other Altair chart.
alt.Chart(regiones).mark_geoshape().encode(
).properties(
projection={'type': 'identity', 'reflectY': True},
width=250,
height=600
)
# ### Exercise 9.1
#
# (10 points)
#
# Define a `DataFrame` named `casos_geo` with the columns
#
# * `region`
# * `codigo_region`
# * `fecha`
# * `poblacion`
# * `casos_confirmados`
# * `tasa`
# * `casos_activos`
# * `fallecidos`
# * `geometry`
#
# Be very careful about how you join the `confirmados`, `activos`, `fallecidos`, and `regiones` dataframes. Ideally join on the region code, but where it is not available use the region name (don't forget to use the `region_names` dictionary).
confirmados=confirmados.rename(columns={'region_id': 'codigo_region'})
confirmados["codigo_region"]=confirmados["codigo_region"].astype('int')
regiones=regiones.rename(columns={'codregion': 'codigo_region'})
casos_geo = ( confirmados[["region","codigo_region","fecha","poblacion","casos_confirmados","tasa"]]
.merge(activos[["codigo_region","fecha","casos_activos"]], how='inner',on=['fecha','codigo_region'])
.merge(fallecidos, how='inner', on=['region',"fecha"])
.merge(regiones[['codigo_region','geometry']], how='inner', on=["codigo_region"])
)
# Run the following to convert the previous DataFrame into a GeoDataFrame
casos_geo = casos_geo.pipe(lambda x: gpd.GeoDataFrame(x, geometry="geometry"))
# ### Exercise 9.2
#
# (5 points)
#
# Modify the `covid_chile_chart` function so that it receives a date and a column. It must then filter `casos_geo` to the records of the selected date and draw a map where the regions are colored according to the chosen column.
def covid_chile_chart(fecha, col):
    fecha = pd.to_datetime(fecha)
    data = casos_geo.loc[lambda df: df["fecha"] == fecha]
    chart = alt.Chart(data).mark_geoshape().encode(
        color=col
    ).properties(
        projection={'type': 'identity', 'reflectY': True},
        width=150,
        height=400
    )
    chart.display()
    return
# Try it out with the following
fecha = "2020-04-13"
col = "tasa"
covid_chile_chart(fecha, col)
# ### Exercise 9.3
#
# (5 points)
#
# Now, using `widgets`, we will build the interactive dashboard. Define the following:
#
# * `col_widget`: A `widgets.Dropdown` whose options are the columns `poblacion`, `casos_confirmados`, `tasa`, `casos_activos`, and `fallecidos`. In addition, its `description` argument must be `Columna`.
# * `fecha_widget`: A `widgets.DatePicker` whose `description` argument is `Fecha`.
# * Both widgets must have the argument `continuous_update=False`
import ipywidgets as widgets
from ipywidgets import interactive, interact
col_widget = widgets.Dropdown(
options=['poblacion','casos_confirmados','tasa','casos_activos','fallecidos'],
description='Columna',
continuous_update = False
)
fecha_widget = widgets.DatePicker(
description='Fecha',
continuous_update = False
)
# Finally, using `interactive`, the `covid_chile_chart` function, and both widgets, it is possible to create an interactive _dashboard_ with the COVID-19 data.
#
# Take a deep breath and explore your creation!
covid_dashboard = interactive(
covid_chile_chart,
fecha=fecha_widget,
col=col_widget
)
covid_dashboard
# **Comments:** It looks interesting, but it demands a lot of resources.
| homeworks/hwk02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Question 1 of Assignment 3
alt = input("Enter the Altitude = ")
alt = int(alt)
print(alt)
if alt <= 1000:
    print("Pilot Can Land Safely")
elif alt <= 5000:
    print("Bring the Altitude to 1000ft")
else:
    print("Turn Around and Try Again..")
# -
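The same decision logic can be wrapped in a small function, which makes each branch testable without calling `input()`. This is a sketch; the function name `landing_advice` is illustrative and not part of the assignment:

```python
def landing_advice(altitude_ft: int) -> str:
    """Return landing advice for a given altitude in feet."""
    if altitude_ft <= 1000:
        return "Pilot Can Land Safely"
    elif altitude_ft <= 5000:
        return "Bring the Altitude to 1000ft"
    else:
        return "Turn Around and Try Again.."

# Exercise each branch without interactive input
print(landing_advice(800))    # Pilot Can Land Safely
print(landing_advice(3000))   # Bring the Altitude to 1000ft
print(landing_advice(9000))   # Turn Around and Try Again..
```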
# +
# Question 2 of Assignment 3
# Python code to print the prime numbers between 1 and 200
# +
x = 1
y = 200
print("Prime numbers between", x, "and", y, "are:")
for num in range(x, y + 1):
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            print(num)
# -
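The `for ... else` construct above is what makes this work: the `else` branch runs only when the inner loop finishes without hitting `break`, i.e. when no divisor was found. Trial division only needs to check divisors up to the square root of `num`; a sketch of that faster variant (the helper name `is_prime` is illustrative):

```python
def is_prime(n: int) -> bool:
    """Trial division, checking divisors only up to the square root of n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

primes = [n for n in range(1, 201) if is_prime(n)]
print(primes[:5])   # [2, 3, 5, 7, 11]
print(len(primes))  # 46 primes up to 200
```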
| Assignment Day 3 (2) (1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="copyright"
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="4f82ca678df6"
# This notebook is a revised version of a notebook from [<NAME>](https://github.com/RajeshThallam/vertex-ai-labs/blob/main/07-vertex-train-deploy-lightgbm/vertex-train-deploy-lightgbm-model.ipynb)
# + [markdown] id="JAPoU8Sm5E6e"
# # E2E ML on GCP: MLOps stage 2 : experimentation: get started with Vertex AI Training for LightGBM
#
# <table align="left">
#
# <td>
# <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/ml_ops/stage2/get_started_vertex_training_lightgbm.ipynb">
# <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
# </a>
# </td>
# <td>
# <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/community/ml_ops/stage2/get_started_vertex_training_lightgbm.ipynb">
# <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
# View on GitHub
# </a>
# </td>
# <td>
# <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage2/get_started_vertex_training_lightgbm.ipynb">
# <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
# Run in Vertex Workbench
# </a>
# </td>
# </table>
# + [markdown] id="overview:mlops"
# ## Overview
#
#
# This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 2 : experimentation: get started with Vertex AI Training for LightGBM.
# + [markdown] id="dataset:iris,lcn"
# ### Dataset
#
# The dataset used for this tutorial is the [Iris dataset](https://www.tensorflow.org/datasets/catalog/iris) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
# + [markdown] id="objective:mlops,stage2,get_started_vertex_training_xgboost"
# ### Objective
#
# In this tutorial, you learn how to use `Vertex AI Training` for training a LightGBM custom model.
#
# This tutorial uses the following Google Cloud ML services:
#
# - `Vertex AI Training`
# - `Vertex AI Model` resource
#
# The steps performed include:
#
# - Training using a Python package.
# - Save the model artifacts to Cloud Storage using GCSFuse.
# - Construct a FastAPI prediction server.
# - Construct a Dockerfile deployment image.
# - Test the deployment image locally.
# - Create a `Vertex AI Model` resource.
#
# ### Costs
#
# This tutorial uses billable components of Google Cloud:
#
# * Vertex AI
# * Cloud Storage
#
# Learn about [Vertex AI
# pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
# pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
# Calculator](https://cloud.google.com/products/calculator/)
# to generate a cost estimate based on your projected usage.
# + [markdown] id="setup_local"
# ### Set up your local development environment
#
# If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
#
# Otherwise, make sure your environment meets this notebook's requirements. You need the following:
#
# - The Cloud Storage SDK
# - Git
# - Python 3
# - virtualenv
# - Jupyter notebook running in a virtual environment with Python 3
#
# The Cloud Storage guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
#
# 1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
#
# 2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
#
# 3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
#
# 4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.
#
# 5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
#
# 6. Open this notebook in the Jupyter Notebook Dashboard.
#
# + [markdown] id="install_aip:mbsdk"
# ## Installation
#
# Install the following packages to execute this notebook.
# + id="install_aip:mbsdk"
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
    USER_FLAG = "--user"
# ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
# ! pip3 install -U google-cloud-storage $USER_FLAG -q
# ! pip3 install -U lightgbm $USER_FLAG -q
# + id="install_tensorflow"
if os.getenv("IS_TESTING"):
    # ! pip3 install --upgrade tensorflow $USER_FLAG
# + [markdown] id="restart"
# ### Restart the kernel
#
# Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
# + id="restart"
import os
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython
    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
# + [markdown] id="before_you_begin:nogpu"
# ## Before you begin
#
# ### GPU runtime
#
# This tutorial does not require a GPU runtime.
#
# ### Set up your Google Cloud project
#
# **The following steps are required, regardless of your notebook environment.**
#
# 1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
#
# 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
#
# 3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component,storage-component.googleapis.com)
#
# 4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
#
# 5. Enter your project ID in the cell below. Then run the cell to make sure the
# Cloud SDK uses the right project for all the commands in this notebook.
#
# **Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
# + [markdown] id="project_id"
# #### Set your project ID
#
# **If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
# + id="set_project_id"
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
# + id="autoset_project_id"
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
# + id="set_gcloud_project_id"
# ! gcloud config set project $PROJECT_ID
# + [markdown] id="region"
# #### Region
#
# You can also change the `REGION` variable, which is used for operations
# throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
#
# - Americas: `us-central1`
# - Europe: `europe-west4`
# - Asia Pacific: `asia-east1`
#
# You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
#
# Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
# + id="region"
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
    REGION = "us-central1"  # @param {type: "string"}
# + [markdown] id="timestamp"
# #### Timestamp
#
# If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
# + id="timestamp"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# + [markdown] id="gcp_authenticate"
# ### Authenticate your Google Cloud account
#
# **If you are using Vertex AI Workbench Notebooks**, your environment is already authenticated. Skip this step.
#
# **If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
#
# **Otherwise**, follow these steps:
#
# In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
#
# **Click Create service account**.
#
# In the **Service account name** field, enter a name, and click **Create**.
#
# In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
#
# Click Create. A JSON file that contains your key downloads to your local environment.
#
# Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
# + id="gcp_authenticate"
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = "google.colab" in sys.modules
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
    "DL_ANACONDA_HOME"
):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth
        google_auth.authenticate_user()
    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        # %env GOOGLE_APPLICATION_CREDENTIALS ''
# + [markdown] id="bucket:mbsdk"
# ### Create a Cloud Storage bucket
#
# **The following steps are required, regardless of your notebook environment.**
#
# When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
#
# Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
# + id="bucket"
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
# + id="autoset_bucket"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
    BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
    BUCKET_URI = f"gs://{BUCKET_NAME}"
# + [markdown] id="create_bucket"
# **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
# + id="create_bucket"
# ! gsutil mb -l $REGION $BUCKET_URI
# + [markdown] id="validate_bucket"
# Finally, validate access to your Cloud Storage bucket by examining its contents:
# + id="validate_bucket"
# ! gsutil ls -al $BUCKET_URI
# + [markdown] id="setup_vars"
# ### Set up variables
#
# Next, set up some variables used throughout the tutorial.
# ### Import libraries and define constants
# + id="import_aip:mbsdk"
import google.cloud.aiplatform as aiplatform
import lightgbm
# + id="a3c0380967f5"
print(f"LightGBM version {lightgbm.__version__}")
# + [markdown] id="init_aip:mbsdk"
# ## Initialize Vertex SDK for Python
#
# Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
# + id="init_aip:mbsdk"
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
# + [markdown] id="gar_enable_api"
# ### Enable Artifact Registry API
#
# First, you must enable the Artifact Registry API service for your project.
#
# Learn more about [Enabling service](https://cloud.google.com/artifact-registry/docs/enable-service).
# + id="gar_enable_api"
# ! gcloud services enable artifactregistry.googleapis.com
# + [markdown] id="gar_create_repo"
# ### Create a private Docker repository
#
# Your first step is to create your own Docker repository in Google Artifact Registry.
#
# 1. Run the `gcloud artifacts repositories create` command to create a new Docker repository with your region with the description "docker repository".
#
# 2. Run the `gcloud artifacts repositories list` command to verify that your repository was created.
# + id="gar_create_repo"
PRIVATE_REPO = "prediction"
# ! gcloud artifacts repositories create {PRIVATE_REPO} --repository-format=docker --location={REGION} --description="Prediction repository"
# ! gcloud artifacts repositories list
# + [markdown] id="gar_auth"
# ### Configure authentication to your private repo
#
# Before you push or pull container images, configure Docker to use the `gcloud` command-line tool to authenticate requests to `Artifact Registry` for your region.
# + id="gar_auth"
# ! gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet
# + [markdown] id="container:training,prediction,xgboost"
# #### Set pre-built containers
#
# Set the pre-built Docker container image for training and prediction.
#
#
# For the latest list, see [Pre-built containers for training](https://cloud.google.com/ai-platform-unified/docs/training/pre-built-containers).
#
#
# For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
# + id="container:training,prediction,xgboost"
TRAIN_VERSION = "scikit-learn-cpu.0-23"
DEPLOY_VERSION = "lightgbm-cpu"
# prebuilt
TRAIN_IMAGE = "{}-docker.pkg.dev/vertex-ai/training/{}:latest".format(
REGION.split("-")[0], TRAIN_VERSION
)
DEPLOY_IMAGE = "{}-docker.pkg.dev/{}/{}/{}:latest".format(
REGION, PROJECT_ID, PRIVATE_REPO, DEPLOY_VERSION
)
print("Deploy image:", DEPLOY_IMAGE)
# + [markdown] id="machine:training,prediction"
# #### Set machine type
#
# Next, set the machine type to use for training and prediction.
#
# - Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
#     - `machine type`
#         - `n1-standard`: 3.75 GB of memory per vCPU
#         - `n1-highmem`: 6.5 GB of memory per vCPU
#         - `n1-highcpu`: 0.9 GB of memory per vCPU
#     - `vCPUs`: number of vCPUs \[2, 4, 8, 16, 32, 64, 96\]
#
# *Note: The following is not supported for training:*
#
# - `standard`: 2 vCPUs
# - `highcpu`: 2, 4 and 8 vCPUs
#
# *Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
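Using the per-vCPU figures above, you can estimate the total memory of a machine type string before choosing one. This is a sketch for illustration only; the `machine_memory_gb` helper and its lookup table are not part of any Google Cloud API:

```python
# Approximate memory per vCPU (GB) for N1 machine families, from the table above
GB_PER_VCPU = {"n1-standard": 3.75, "n1-highmem": 6.5, "n1-highcpu": 0.9}

def machine_memory_gb(machine_type: str) -> float:
    """Estimate total memory for a machine type such as 'n1-standard-4'."""
    family, vcpus = machine_type.rsplit("-", 1)
    return GB_PER_VCPU[family] * int(vcpus)

print(machine_memory_gb("n1-standard-4"))  # 15.0
print(machine_memory_gb("n1-highmem-8"))   # 52.0
```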
# + id="machine:training,prediction"
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# + [markdown] id="examine_training_package:xgboost"
# ### Examine the training package
#
# #### Package layout
#
# Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
#
# - PKG-INFO
# - README.md
# - setup.cfg
# - setup.py
# - trainer
# - \_\_init\_\_.py
# - task.py
#
# The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.
#
# The file `trainer/task.py` is the Python script for executing the custom training job. *Note:* when you refer to it in the worker pool specification, you replace the directory slash with a dot (`trainer.task`) and drop the file suffix (`.py`).
#
# #### Package Assembly
#
# In the following cells, you will assemble the training package.
# + id="f4wS4eISox9V"
# Make folder for Python training script
# ! rm -rf custom
# ! mkdir custom
# Add package information
# ! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
# ! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n'lightgbm' ],\n\n packages=setuptools.find_packages())"
# ! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Iris tabular classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: <EMAIL>\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
# ! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
# ! mkdir custom/trainer
# ! touch custom/trainer/__init__.py
# + [markdown] id="taskpy_contents:iris,xgboost"
# ### Create the task script for the Python training package
#
# Next, you create the `task.py` script for driving the training package. Some notable steps include:
#
# - Command-line arguments:
# - `model-dir`: The location to save the trained model. When using Vertex AI custom training, the location will be specified in the environment variable: `AIP_MODEL_DIR`
#
# - Data preprocessing (`get_data()`):
# - Download the dataset and split into training and test.
# - Training (`train_model()`):
# - Trains the model
# - Model artifact saving
# - Saves the model artifacts and evaluation metrics to the Cloud Storage location specified by `model-dir`.
# + id="taskpy_contents:xgboost,iris"
# %%writefile custom/trainer/task.py
# Single Instance Training for Iris
import datetime
import os
import subprocess
import sys
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
import pandas as pd
import lightgbm as lgb
import argparse
import logging
logging.getLogger().setLevel(logging.INFO)
logging.info("Parsing arguments")
parser = argparse.ArgumentParser()
parser.add_argument(
'--model-dir',
dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'),
type=str,
help='Location to export GCS model')
args = parser.parse_args()
logging.info(args)
def get_data():
    # Download data
    logging.info("Downloading data")
    iris = load_iris()
    print(iris.data.shape)
    # split data
    print("Splitting data into test and train")
    x, y = iris.data, iris.target
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=123)
    # create dataset for lightgbm
    print("creating dataset for LightGBM")
    lgb_train = lgb.Dataset(x_train, y_train)
    lgb_eval = lgb.Dataset(x_test, y_test, reference=lgb_train)
    return lgb_train, lgb_eval
def train_model(lgb_train, lgb_eval):
    # specify your configurations as a dict
    params = {
        'boosting_type': 'gbdt',
        'objective': 'multiclass',
        'metric': {'multi_error'},
        'num_leaves': 31,
        'learning_rate': 0.05,
        'feature_fraction': 0.9,
        'bagging_fraction': 0.8,
        'bagging_freq': 5,
        'verbose': 0,
        'num_class': 3
    }
    # train lightgbm model
    logging.info('Starting training...')
    model = lgb.train(params,
                      lgb_train,
                      num_boost_round=20,
                      valid_sets=lgb_eval,
                      early_stopping_rounds=5)
    return model
lgb_train, lgb_eval = get_data()
model = train_model(lgb_train, lgb_eval)
# GCSFuse conversion
gs_prefix = 'gs://'
gcsfuse_prefix = '/gcs/'
if args.model_dir.startswith(gs_prefix):
    args.model_dir = args.model_dir.replace(gs_prefix, gcsfuse_prefix)
    dirpath = os.path.split(args.model_dir)[0]
    if not os.path.isdir(dirpath):
        os.makedirs(dirpath)
# save model to file
logging.info('Saving model...')
model_filename = 'model.txt'
gcs_model_path = os.path.join(args.model_dir, model_filename)
model.save_model(gcs_model_path)
# + [markdown] id="tarball_training_script"
# #### Store training script on your Cloud Storage bucket
#
# Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
# + id="dnmdycf6ox9X"
# ! rm -f custom.tar custom.tar.gz
# ! tar cvf custom.tar custom
# ! gzip custom.tar
# ! gsutil cp custom.tar.gz $BUCKET_URI/trainer_iris.tar.gz
# + [markdown] id="create_custom_pp_training_job:mbsdk"
# ### Create and run custom training job
#
#
# To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
#
# #### Create custom training job
#
# A custom training job is created with the `CustomTrainingJob` class, with the following parameters:
#
# - `display_name`: The human readable name for the custom training job.
# - `container_uri`: The training container image.
#
# - `python_package_gcs_uri`: The location of the Python training package as a tarball.
# - `python_module_name`: The relative path to the training script in the Python package.
#
# *Note:* There is no requirements parameter. You specify any requirements in the `setup.py` script in your Python package.
# + id="rVEMz1xqox9X"
DISPLAY_NAME = "iris_" + TIMESTAMP
job = aiplatform.CustomPythonPackageTrainingJob(
display_name=DISPLAY_NAME,
python_package_gcs_uri=f"{BUCKET_URI}/trainer_iris.tar.gz",
python_module_name="trainer.task",
container_uri=TRAIN_IMAGE,
project=PROJECT_ID,
)
# + [markdown] id="prepare_custom_cmdargs:iris,xgboost"
# ### Prepare your command-line arguments
#
# Now define the command-line arguments for your custom training container:
#
# - `args`: The command-line arguments to pass to the executable that is set as the entry point into the container.
# - `--model-dir` : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
# - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable `DIRECT = True`), or
# - indirect: The service passes the Cloud Storage location as the environment variable `AIP_MODEL_DIR` to your training script (set variable `DIRECT = False`). In this case, you tell the service the model artifact location in the job specification.
# + id="AoUfpBqVox9Y"
MODEL_DIR = "{}/{}".format(BUCKET_URI, TIMESTAMP)
DIRECT = False
if DIRECT:
    CMDARGS = [
        "--model-dir=" + MODEL_DIR,
    ]
else:
    CMDARGS = []
# + [markdown] id="run_custom_job:mbsdk"
# #### Run the custom training job
#
# Next, you run the custom job to start the training job by invoking the method `run`, with the following parameters:
#
# - `args`: The command-line arguments to pass to the training script.
# - `replica_count`: The number of compute instances for training (replica_count = 1 is single node training).
# - `machine_type`: The machine type for the compute instances.
# - `accelerator_type`: The hardware accelerator type.
# - `accelerator_count`: The number of accelerators to attach to a worker replica.
# - `base_output_dir`: The Cloud Storage location to write the model artifacts to.
# - `sync`: Whether to block until completion of the job.
# + id="JCruQq1aox9Y"
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=False,
)
model_path_to_deploy = MODEL_DIR + "/model"
# + [markdown] id="list_job"
# ### List a custom training job
# + id="KBM_KLMSox9Y"
_job = job.list(filter=f"display_name=iris_{TIMESTAMP}")
print(_job)
# + [markdown] id="custom_job_wait:mbsdk"
# ### Wait for completion of custom training job
#
# Next, wait for the custom training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the custom training job is completed.
# + id="lHPMHbSyox9Z"
job.wait()
# + [markdown] id="delete_job"
# ### Delete a custom training job
#
# After a training job is completed, you can delete the training job with the method `delete()`. Prior to completion, a training job can be canceled with the method `cancel()`.
# + id="tlYg7Sp-ox9Z"
job.delete()
# + [markdown] id="2478e67d679f"
# ### Verify the model artifacts
#
# Next, verify the training script successfully saved the trained model to your Cloud Storage location.
# + id="c644f0d3a201"
print(f"Model path with trained model artifacts {model_path_to_deploy}")
# ! gsutil ls $model_path_to_deploy
# + [markdown] id="import_model:migration,new"
# ## Deploy LightGBM model to Vertex AI Endpoint
#
# ### Build an HTTP Server with FastAPI
#
# Next, you use FastAPI to implement the HTTP server as a custom deployment container. The container must listen and respond to liveness checks, health checks, and prediction requests. The HTTP server must listen for requests on 0.0.0.0.
#
# Learn more about [deployment container requirements](https://cloud.google.com/ai-platform-unified/docs/predictions/custom-container-requirements#image).
#
# Learn more about [FastAPI](https://fastapi.tiangolo.com/).
# + id="b28b53f8b3f4"
# Make folder for serving script
# ! rm -rf serve
# ! mkdir serve
# Make the predictor subfolder
# ! mkdir serve/app
# + [markdown] id="8baa8361ecb4"
# ### Create the requirements file for the serving container
#
# Next, create the `requirements.txt` file for the server environment which specifies which Python packages need to be installed on the serving container.
# + id="a2dd3e671130"
# %%writefile serve/requirements.txt
numpy
scikit-learn>=0.24
pandas==1.0.4
lightgbm==3.2.1
google-cloud-storage>=1.26.0,<2.0.0dev
# + [markdown] id="07c16e1003ef"
# ### Write the FastAPI serving script
#
# Next, you write the serving script for the HTTP server using `FastAPI`, as follows:
#
# - `app`: Instantiate a `FastAPI` application.
# - `health()`: Define the response to the health request.
#   - Return status code 200.
# - `predict()`: Define the response to the predict request.
#   - `body = await request.json()`: Asynchronously read the JSON body of the request.
#   - `instances = body["instances"]`: Extract the content of the prediction request.
#   - `inputs = np.asarray(instances)`: Reformat the prediction request as a numpy array.
#   - `outputs = model.predict(inputs)`: Invoke the model to make the predictions.
#   - `return {"predictions": ... }`: Return the formatted predictions in the response body.
# + id="97b0cd77339c"
# %%writefile serve/app/main.py
from fastapi.logger import logger
from fastapi import FastAPI, Request
import numpy as np
import os
from sklearn.datasets import load_iris
import lightgbm as lgb
import logging
gunicorn_logger = logging.getLogger('gunicorn.error')
logger.handlers = gunicorn_logger.handlers

if __name__ != "main":
    logger.setLevel(gunicorn_logger.level)
else:
    logger.setLevel(logging.DEBUG)
app = FastAPI()
model_f = "/model/model.txt"
logger.info("Loading model")
_model = lgb.Booster(model_file=model_f)
logger.info("Loading target class labels")
_class_names = load_iris().target_names
@app.get(os.environ['AIP_HEALTH_ROUTE'], status_code=200)
def health():
    """Health check to ensure the HTTP server is ready to handle
    prediction requests.
    """
    return {"status": "healthy"}


@app.post(os.environ['AIP_PREDICT_ROUTE'])
async def predict(request: Request):
    body = await request.json()

    instances = body["instances"]
    inputs = np.asarray(instances)
    outputs = _model.predict(inputs)
    logger.info(f"Outputs {outputs}")

    return {"predictions": [_class_names[class_num] for class_num in np.argmax(outputs, axis=1)]}
# + [markdown] id="ea37d0597e2b"
# ### Add pre-start script
# FastAPI will execute this script before starting up the server. The `PORT` environment variable is set equal to `AIP_HTTP_PORT` in order to run FastAPI on the same port expected by AI Platform (Unified).
# + id="e029a2d85935"
# %%writefile serve/app/prestart.sh
# #!/bin/bash
export PORT=$AIP_HTTP_PORT
# + [markdown] id="4a402c1f4bcc"
# ### Store test instances
#
# Next, you create synthetic examples to subsequently test the FastAPI server and the trained LightGBM model.
#
# Learn more about [JSON formatting of prediction requests for custom models](https://cloud.google.com/ai-platform-unified/docs/predictions/online-predictions-custom-models#request-body-details).
# + id="d45c868ccaed"
# %%writefile serve/instances.json
{
    "instances": [
        [6.7, 3.1, 4.7, 1.5],
        [4.6, 3.1, 1.5, 0.2]
    ]
}
# + [markdown] id="5ae1b8aead6c"
# ## Build and push prediction container to Artifact Registry
#
# Write the Dockerfile, using `tiangolo/uvicorn-gunicorn-fastapi` as a base image. This will automatically run FastAPI for you using Gunicorn and Uvicorn.
#
# Learn more about [Deploying FastAPI with Docker](https://fastapi.tiangolo.com/deployment/docker/).
# + id="289dcdf1c4f4" magic_args="-s $MODEL_DIR" language="bash"
#
# MODEL_DIR=$1
#
# mkdir -p ./serve/model/
# gsutil cp -r ${MODEL_DIR}/model/ ./serve/model/
#
# cat > ./serve/Dockerfile <<EOF
# FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
#
# COPY ./app /app
# COPY ./model /
# COPY requirements.txt requirements.txt
#
# RUN pip3 install -r requirements.txt
#
# EXPOSE 7080
# CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7080"]
#
# EOF
# + [markdown] id="4f3bca44ead4"
# #### Build the container locally
#
# Next, you build and tag your custom deployment container.
# + id="64261b86085e"
if not IS_COLAB:
    # ! docker build --tag={DEPLOY_IMAGE} ./serve
else:
    # install docker daemon
    # ! apt-get -qq install docker.io
# + [markdown] id="c746c507aba6"
# ### Run and test the container locally (optional)
#
# Run the container locally in detached mode and provide the environment variables that the container requires. These env vars will be provided to the container by AI Platform Prediction once deployed. Test the `/health` and `/predict` routes, then stop the running image.
# + [markdown] id="6534859e481b"
# Before pushing the container image to Artifact Registry for use with Vertex AI Predictions, you can run it as a container in your local environment to verify that the server works as expected.
# + id="3246b3a01d92"
if not IS_COLAB:
    # ! docker rm local-iris 2>/dev/null
    # ! docker run -t -d --rm -p 7080:7080 \
    #  --name=local-iris \
    #  -e AIP_HTTP_PORT=7080 \
    #  -e AIP_HEALTH_ROUTE=/health \
    #  -e AIP_PREDICT_ROUTE=/predict \
    #  -e AIP_STORAGE_URI={MODEL_DIR} \
    #  {DEPLOY_IMAGE}

    # ! docker container ls
    # ! sleep 10
# + [markdown] id="1bdfb565a253"
# #### Health check
#
# Send the container's server a health check. The output should be {"status": "healthy"}.
# + id="bdc1f41f8dcc"
if not IS_COLAB:
    # ! curl http://localhost:7080/health
# + [markdown] id="c92e4992b2f4"
# If successful, the server returns the following response:
#
# ```
# {
# "status": "healthy"
# }
# ```
# + [markdown] id="5433a0c62a99"
# #### Prediction check
#
# Send the container's server a prediction request.
# + id="458de32ee979"
# ! curl -X POST \
# -d @serve/instances.json \
# -H "Content-Type: application/json; charset=utf-8" \
# http://localhost:7080/predict
# + [markdown] id="db78dc1f2313"
# This request uses the two test instances stored earlier. If successful, the server returns predictions in the following format:
#
# ```
# {"predictions":["versicolor","setosa"]}
# ```
# + [markdown] id="cc16b5fa1b6d"
# #### Stop the local container
#
# Finally, stop the local container.
# + id="c8127f3e5b74"
if not IS_COLAB:
    # ! docker stop local-iris
# + [markdown] id="8c57c65c5d25"
# #### Push the serving container to Artifact Registry
#
# Push your container image with inference code and dependencies to your Artifact Registry.
# + id="8f1fc336bf79"
if not IS_COLAB:
    # ! docker push $DEPLOY_IMAGE
# + [markdown] id="f50e9c553fb7"
# *Executes in Colab*
# + id="a7e8c98f1e56" magic_args="-s $IS_COLAB $DEPLOY_IMAGE" language="bash"
# if [ $1 == "False" ]; then
# exit 0
# fi
# set -x
# dockerd -b none --iptables=0 -l warn &
# for i in $(seq 5); do [ ! -S "/var/run/docker.sock" ] && sleep 2 || break; done
# docker build --tag=$2 ./serve
# docker push $2
# kill $(jobs -p)
# + [markdown] id="ac8627fc8434"
# #### Deploying the serving container to Vertex AI Predictions
# Next, you create a model resource on Vertex AI and deploy it to a Vertex AI endpoint. A model must be deployed to an endpoint before it can be used to serve predictions. The deployed model runs the custom container image to serve predictions.
# + [markdown] id="upload_model:mbsdk"
# ## Upload the model
#
# Next, upload your model to a `Model` resource using the `Model.upload()` method, with the following parameters:
#
# - `display_name`: The human readable name for the `Model` resource.
# - `artifact`: The Cloud Storage location of the trained model artifacts.
# - `serving_container_image_uri`: The serving container image.
# - `serving_container_predict_route`: HTTP path to send prediction requests to the container.
# - `serving_container_health_route`: HTTP path to send health check requests to the container.
# - `serving_container_ports`: The ports exposed by the container to listen to requests.
# - `sync`: Whether to execute the upload asynchronously or synchronously.
#
# If the `upload()` method is run asynchronously, you can subsequently block until completion with the `wait()` method.
# + id="c34825f267d4"
APP_NAME = "iris"
model_display_name = f"{APP_NAME}-{TIMESTAMP}"
model_description = "LightGBM based iris flower classifier with custom container"
MODEL_NAME = APP_NAME
health_route = "/ping"
predict_route = "/predict"
serving_container_ports = [7080]
# + id="upload_model:mbsdk"
model = aiplatform.Model.upload(
    display_name=model_display_name,
    description=model_description,
    serving_container_image_uri=DEPLOY_IMAGE,
    serving_container_predict_route=predict_route,
    serving_container_health_route=health_route,
    serving_container_ports=serving_container_ports,
)
model.wait()
print(model.display_name)
print(model.resource_name)
# + [markdown] id="make_batch_predictions:migration"
# ## Make batch predictions
# + [markdown] id="batchpredictionjobs_create:migration,new,mbsdk"
# Documentation - [Batch prediction requests - Vertex AI](https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions)
# + [markdown] id="make_test_items:xgboost,tabular,iris"
# ### Make test items
#
# You will use synthetic data as test items. Don't be concerned that the data is synthetic -- the goal is simply to demonstrate how to make a prediction.
# + id="make_test_items:xgboost,tabular,iris"
INSTANCES = [[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]]
# + [markdown] id="make_batch_file:custom,tabular,list"
# ### Make the batch input file
#
# Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a list of the form:
#
# [ [ content_1], [content_2] ]
#
# - `content`: The feature values of the test item as a list.
# + id="ae7342e19f2b"
import json

# Preview the JSON Lines encoding of each instance (displayed as the cell output).
[json.dumps(record) for record in INSTANCES]
# + id="make_batch_file:custom,tabular,list"
import tensorflow as tf
gcs_input_uri = f"{BUCKET_URI}/{APP_NAME}/test/batch_input/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    for i in INSTANCES:
        f.write(json.dumps(i) + "\n")
# ! gsutil cat $gcs_input_uri
# + [markdown] id="batch_request:mbsdk,jsonl,custom,cpu"
# ### Make the batch prediction request
#
# Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
#
# - `job_display_name`: The human readable name for the batch prediction job.
# - `gcs_source`: A list of one or more batch request input files.
# - `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.
# - `instances_format`: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
# - `predictions_format`: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
# - `machine_type`: The type of machine to use for the batch prediction job.
# - `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
# + id="batch_request:mbsdk,jsonl,custom,cpu"
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
    job_display_name=f"{APP_NAME}_{TIMESTAMP}",
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=f"{BUCKET_URI}/{APP_NAME}/test/batch_output/",
    instances_format="jsonl",
    predictions_format="jsonl",
    model_parameters=None,
    machine_type=DEPLOY_COMPUTE,
    starting_replica_count=MIN_NODES,
    max_replica_count=MAX_NODES,
    sync=False,
)
print(batch_predict_job)
# + [markdown] id="batch_request_wait:mbsdk"
# ### Wait for completion of batch prediction job
#
# Next, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
# + id="batch_request_wait:mbsdk"
batch_predict_job.wait()
# + [markdown] id="get_batch_prediction:mbsdk,custom,lcn"
# ### Get the predictions
#
# Next, get the results from the completed batch prediction job.
#
# The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
#
# - `instance`: The prediction request.
# - `prediction`: The prediction response.
# + id="get_batch_prediction:mbsdk,custom,lcn"
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)

tags = list()
for prediction_result in prediction_results:
    gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
    with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
        for line in gfile.readlines():
            line = json.loads(line)
            print(line)
            break
# + [markdown] id="make_online_predictions:migration"
# ## Make online predictions
# + [markdown] id="deploy_model:migration,new,mbsdk"
# Documentation - [Online prediction request](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api)
# + [markdown] id="deploy_model:mbsdk,cpu"
# ## Deploy the model
#
# Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method, with the following parameters:
#
# - `deployed_model_display_name`: A human readable name for the deployed model.
# - `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
# If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
# If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
# - `machine_type`: The type of machine on which to serve online predictions.
# - `starting_replica_count`: The number of compute instances to initially provision.
# - `max_replica_count`: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
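When an endpoint already serves other models, the `traffic_split` dictionary described above mixes the key `"0"` (the model being deployed in this call) with the deployed-model IDs of the existing models. A small illustrative sketch (the existing deployed-model ID below is a made-up placeholder, not a value from this tutorial):

```python
# Illustrative only: split traffic 20/80 between a newly deployed model ("0")
# and a model already on the endpoint (made-up deployed-model ID).
existing_model_id = "3141592653589793238"  # hypothetical placeholder
traffic_split_two_models = {"0": 20, existing_model_id: 80}

# The percentages must sum to 100.
assert sum(traffic_split_two_models.values()) == 100
print(traffic_split_two_models)
```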
# + id="deploy_model:mbsdk,cpu"
DEPLOYED_NAME = f"{APP_NAME}-{TIMESTAMP}"
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
endpoint = model.deploy(
    deployed_model_display_name=DEPLOYED_NAME,
    traffic_split=TRAFFIC_SPLIT,
    machine_type=DEPLOY_COMPUTE,
    min_replica_count=MIN_NODES,
    max_replica_count=MAX_NODES,
)
# + [markdown] id="predict_request:mbsdk,custom,lcn"
# ### Make the prediction
#
# Now that your `Model` resource is deployed to an `Endpoint` resource, you can do online predictions by sending prediction requests to the `Endpoint` resource.
#
# #### Request
#
# The format of each instance is:
#
# [feature_list]
#
# Since the predict() method can take multiple items (instances), send your single test item as a list of one test item.
#
# #### Response
#
# The response from the predict() call is a Python dictionary with the following entries:
#
# - `ids`: The internal assigned unique identifiers for each prediction request.
# - `predictions`: The predicted confidence, between 0 and 1, per class label.
# - `deployed_model_id`: The Vertex AI identifier for the deployed `Model` resource which did the predictions.
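As a sketch of how such a response could be post-processed (the confidence values and model ID below are made up for illustration, not real output), the most likely class per instance can be recovered by taking the argmax over the per-class confidences:

```python
# Hypothetical response shaped like the prediction object described above.
response = {
    "deployed_model_id": "1234567890",  # made-up ID
    "predictions": [[0.01, 0.95, 0.04], [0.90, 0.08, 0.02]],
}
class_names = ["setosa", "versicolor", "virginica"]

# Pick the label with the highest confidence for each instance.
best_labels = [
    class_names[max(range(len(scores)), key=scores.__getitem__)]
    for scores in response["predictions"]
]
print(best_labels)  # ['versicolor', 'setosa']
```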
# + id="predict_request:mbsdk,custom,lcn"
instances_list = INSTANCES
prediction = endpoint.predict(instances_list)
print(prediction)
# + [markdown] id="undeploy_model:mbsdk"
# ## Undeploy the model
#
# When you are done doing predictions, you undeploy the model from the `Endpoint` resouce. This deprovisions all compute resources and ends billing for the deployed model.
# + id="undeploy_model:mbsdk"
endpoint.undeploy_all()
# + [markdown] id="gar_delete_repo"
# ### Deleting your private Docker repository
#
# Finally, once your private repository becomes obsolete, use the command `gcloud artifacts repositories delete` to delete it from Artifact Registry.
# + id="gar_delete_repo"
# ! gcloud artifacts repositories delete {PRIVATE_REPO} --location={REGION} --quiet
# + [markdown] id="cleanup:mbsdk"
# # Cleaning up
#
# To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
# project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
#
# Otherwise, you can delete the individual resources you created in this tutorial.
#
# + id="cleanup:mbsdk"
# Delete the model using the Vertex model object
try:
    model.delete()
    endpoint.delete()
except Exception as e:
    print(e)

delete_bucket = True
if delete_bucket or os.getenv("IS_TESTING"):
    # ! gsutil rm -r $BUCKET_URI
| notebooks/community/ml_ops/stage2/get_started_vertex_training_lightgbm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # IF
# ## Conditional tests
# At the heart of every if statement is an expression that evaluates to either True or False, called a conditional test.
# Python uses the value of the conditional test to decide whether to execute the code in an if statement: if the test evaluates to True the code runs, otherwise it is skipped.
## Check whether two values are equal
var = 'a'
print(var.upper() == 'a'.upper())
## The if statement
var = 'a'
if var == 'a':
    print(True)
## The if-else statement
var = 'a'
if var == 'b':
    print(True)
else:
    print(False)
# +
## The if-elif-else statement
var = 'variable'

### A single elif
if var == 'v':
    print(True)
elif len(var) > 1:
    print(len(var))
else:
    print(False)

### Multiple elifs
if var == 'v':
    print(True)
elif len(var) < 1:
    print(len(var))
### Check whether var contains 'a'
elif 'a' in var:
    print('A')
else:
    print(False)

### Omitting the else clause
if var == 'v':
    print(True)
elif len(var) < 1:
    print(len(var))
### Check whether var contains 'a'
elif 'a' in var:
    print('A')
# +
## Using if with lists
letters = ['a', 'b', 'c', 'd', 'e']
if len(letters) != 0:
    for var in letters:
        if var == 'a':
            print(letters[0])
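As a small illustrative addition (not part of the original lesson), the `in` and `not in` operators are the idiomatic way to test membership in a sequence, and they sidestep the pitfalls of `str.find`, which returns -1 (a truthy value) when the substring is absent:

```python
letters = ['a', 'b', 'c', 'd', 'e']

# 'in' evaluates to True when the value occurs in the sequence.
if 'a' in letters:
    print("found 'a'")

# 'not in' is the idiomatic way to test for absence.
if 'z' not in letters:
    print("'z' is missing")
```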
| Part1/No5.ipynb |
# # Whole-cell currents recorded in DRN SOM neurons
#
# Shown in fig. 1.
from common import colors, sbarlw, insetlw
import os
os.chdir(os.path.join('..', '..'))
print(os.getcwd())
# +
from __future__ import division
import pickle
import warnings
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gs
from mpl_toolkits.axes_grid1.inset_locator import inset_axes, mark_inset
import seaborn as sns
from scipy import stats
from scipy.signal import find_peaks
import pandas as pd
from ezephys.rectools import ABFLoader
from grr.cell_class import (
subtract_baseline,
subtract_leak,
)
from grr import pltools
from grr.CurveFit import fit_decay_curve, fit_gating_curve, plot_linear_fit
from grr.Tools import dashedBorder, timeToIndex, getIndicesByPercentile, stripNan
# +
IMG_PATH = None
NOTEBOOK_PATH = os.path.join('figs', 'scripts')
plt.style.use(os.path.join(NOTEBOOK_PATH, 'publication_figure_style.dms'))
# +
# Load V-steps files
loader = ABFLoader()
GABA_PATH = os.path.join('data', 'raw', 'GABA')
# Load gating data
gating = []
fnames = {
    'matched_I_V_steps': [
        '18n22000.abf',
        '18n22003.abf',
        '18n22005.abf',
        '18n22008.abf',
        '18n22013.abf',
        '18n22017.abf',
    ],
    'unmatched_V_steps': ['18n16004.abf', '18n16005.abf', '18n22015.abf'],
    'DRN393_firing_vsteps': ['18n16003.abf'],
    'DRN398_firing_vsteps': ['18n16015.abf'],
}

for dir_ in fnames:
    gating.extend(
        loader.load(
            [os.path.join(GABA_PATH, dir_, fname) for fname in fnames[dir_]]
        )
    )
del fnames, dir_
# -
gating[0].plot()
print(gating[0][1, 0, :])
print(gating[0][1, 20000, :])
print(gating[0][1, 40000, :])
# # Subtract baseline/leak
gating_tmp = []
for rec in gating:
    baseline_subtracted = subtract_baseline(rec, slice(1000, 2000), 0)
    leak_subtracted = subtract_leak(baseline_subtracted, slice(1000, 2000), slice(3000, 3400))
    gating_tmp.append(leak_subtracted)
gating = gating_tmp

del gating_tmp, baseline_subtracted, leak_subtracted
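Conceptually, baseline subtraction removes the mean current measured in a pre-stimulus window from the whole trace; the `grr` helpers above do this (plus leak subtraction) on `Recording` objects. A minimal, self-contained sketch of the baseline step on a synthetic trace (all numbers invented for illustration):

```python
# Synthetic current trace: a constant 10 pA offset plus a 100 pA "response"
# in the second half. The baseline window covers the flat first half.
trace = [10.0] * 50 + [110.0] * 50
baseline_window = slice(0, 50)

# Baseline = mean current over the pre-stimulus window.
baseline = sum(trace[baseline_window]) / 50.0
subtracted = [x - baseline for x in trace]

print(subtracted[0], subtracted[-1])  # 0.0 100.0
```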
# # Fit decay
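The cell below fits a monoexponential decay to the 80-20% interval of each transient using `grr.CurveFit.fit_decay_curve`. As a rough illustration of the underlying idea (not the actual fitting routine), a decay time constant can be estimated from noiseless data by a log-linear least-squares fit; all numbers here are synthetic:

```python
import math

dt = 0.1           # ms per sample, matching the 10 kHz sampling used here
tau_true = 20.0    # ms, synthetic "ground truth" time constant
amplitude = 500.0  # pA

# Noiseless monoexponential decay.
y = [amplitude * math.exp(-i * dt / tau_true) for i in range(1000)]

# Log-linearize: ln(y) = ln(A) - t / tau, then fit the slope by least squares.
t = [i * dt for i in range(len(y))]
log_y = [math.log(v) for v in y]
n = len(t)
t_mean = sum(t) / n
ly_mean = sum(log_y) / n
slope = sum((ti - t_mean) * (li - ly_mean) for ti, li in zip(t, log_y)) / \
    sum((ti - t_mean) ** 2 for ti in t)
tau_est = -1.0 / slope

print(round(tau_est, 1))  # 20.0
```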
# +
# Fit inactivation tau and save.
step_amplitude = 50  # 50 mV step (-70 to -20 mV)

IA_inactivation = {
    'traces': [],
    'fitted_data': [],
    'fitted_curves': [],
    'range_fitted': [],
    'inactivation_taus': [],
    'peak_latencies': [],
    'peak_amplitudes': [],
    'steady_state_amplitudes': [],
    'peak_exists': []
}
peak_fit_params = {
    'stimulus_start_time': 2606.2,
    'steady_state_time': 3606.2,
    'peak_slice': slice(26090, 30000),
    'decay_slice_end': 50000
}

for i in range(len(gating)):
    peak_inds = find_peaks(
        gating[i][0, peak_fit_params['peak_slice'], -1],
        distance=500,
        prominence=50.,
        width=100
    )[0]
    if len(peak_inds) > 0:
        if len(peak_inds) > 1:
            warnings.warn(
                '{} peaks detected for recording {}.'.format(len(peak_inds), i)
            )
        peak_exists = True
        peak_ind = np.argmax(gating[i][0, peak_fit_params['peak_slice'], -1])
    else:
        peak_exists = False
    IA_inactivation['peak_exists'].append(peak_exists)

    # Find peak latency
    if peak_exists:
        IA_inactivation['peak_latencies'].append(
            peak_ind * 0.1
            + peak_fit_params['peak_slice'].start * 0.1
            - peak_fit_params['stimulus_start_time']
        )
    else:
        IA_inactivation['peak_latencies'].append(np.nan)

    # Get steady-state amplitude
    steady_state_amplitude = gating[i][0, timeToIndex(peak_fit_params['steady_state_time'], 0.1)[0], -1]
    IA_inactivation['steady_state_amplitudes'].append(steady_state_amplitude)

    # Get peak amplitude.
    if peak_exists:
        peak_amplitude = gating[i][0, peak_ind + peak_fit_params['peak_slice'].start, -1]
        IA_inactivation['peak_amplitudes'].append(peak_amplitude)
    else:
        IA_inactivation['peak_amplitudes'].append(np.nan)

    # Fit decay
    if peak_exists:
        # Convert decay_slice to eighty twenty range
        decay_slice = slice(peak_ind + peak_fit_params['peak_slice'].start, peak_fit_params['decay_slice_end'])
        decay_slice = getIndicesByPercentile(gating[i][0, decay_slice, -1], [0.80, 0.20]) + decay_slice.start  # Get 80-20 interval
        decay_slice[1] = np.argmin(gating[i][0, decay_slice[0]:decay_slice[1], -1]) + decay_slice[0]  # Truncate to min.
        decay_slice += peak_ind
        decay_slice = slice(decay_slice[0], decay_slice[1])
        plt.plot(gating[i][0, decay_slice, -1])
        plt.show()

        IA_inactivation['range_fitted'].append([decay_slice.start * 0.1, decay_slice.stop * 0.1])

        t_vec = np.arange(0, gating[i].shape[1], 0.1)[:gating[i].shape[1]]
        fitted_tmp = gating[i][0, decay_slice, -1]
        IA_inactivation['fitted_data'].append(np.array(
            [fitted_tmp,
             t_vec[decay_slice]]
        ))
        IA_inactivation['traces'].append(np.array([gating[i][0, :, -1], gating[i][1, :, -1], t_vec]))
        p_tmp, fitted_tmp = fit_decay_curve(
            fitted_tmp,
            [fitted_tmp[0] - fitted_tmp[-1], fitted_tmp[-1], 20],
            dt=0.1
        )
        IA_inactivation['inactivation_taus'].append(p_tmp[2])
        IA_inactivation['fitted_curves'].append(np.array(
            [fitted_tmp[0, :],
             np.linspace(decay_slice.start * 0.1, decay_slice.stop * 0.1, len(fitted_tmp[0, :]))]
        ))

        # Diagnostic plot of decay fit.
        plt.figure()
        plt.axhline(peak_amplitude)
        plt.axvline((peak_ind + peak_fit_params['peak_slice'].start) * 0.1)
        plt.plot(t_vec, gating[i][0, :, -1], 'k-', lw=0.5)
        #plt.plot(t_vec[peak_fit_params['decay_slice']], gating[i][0, peak_fit_params['decay_slice'], -1], 'r-')
        plt.plot(
            np.linspace(
                decay_slice.start * 0.1, decay_slice.stop * 0.1, len(fitted_tmp[0, :])
            ),
            fitted_tmp[0, :], '--', color='gray', lw=4
        )
        plt.xlim(2600, 4000)
        plt.show()
    else:
        IA_inactivation['range_fitted'].append([np.nan, np.nan])
        IA_inactivation['traces'].append(np.array([gating[i][0, :, -1], gating[i][1, :, -1], gating[i].time_supp]))
        IA_inactivation['inactivation_taus'].append(np.nan)
        IA_inactivation['fitted_curves'].append(None)

for key in IA_inactivation:
    if key != 'fitted_data':
        IA_inactivation[key] = np.array(IA_inactivation[key])

print('IA inactivation tau {:.1f} +/- {:.1f} (mean +/- SD)'.format(
    IA_inactivation['inactivation_taus'].mean(), IA_inactivation['inactivation_taus'].std()
))

# with open(PROCESSED_PATH + 'inactivation_fits.dat', 'wb') as f:
#     pickle.dump(IA_inactivation, f)
# -
print(np.sum(IA_inactivation['peak_exists']) / len(IA_inactivation['peak_exists']))
IA_inactivation.keys()
for dataset in ['peak_amplitudes', 'peak_latencies', 'steady_state_amplitudes', 'inactivation_taus']:
    print('{:>25} {:>10.3} +/- {:>5.3}'.format(
        dataset,
        np.nanmean(IA_inactivation[dataset]),
        stats.sem(IA_inactivation[dataset], nan_policy='omit')
    ))
# ## Summary statistics for quantities in nS
print('Peak amplitudes (nS) {:>20.3} +/- {:>5.3}'.format(
    np.nanmean(IA_inactivation['peak_amplitudes'] / step_amplitude),
    stats.sem(IA_inactivation['peak_amplitudes'] / step_amplitude, nan_policy='omit')
))
print('Steady state amplitudes (nS) {:>20.3} +/- {:>5.3}'.format(
    np.nanmean(IA_inactivation['steady_state_amplitudes'] / step_amplitude),
    stats.sem(IA_inactivation['steady_state_amplitudes'] / step_amplitude, nan_policy='omit')
))
IA_inactivation['steady_state_conductance'] = IA_inactivation['steady_state_amplitudes'] / step_amplitude
IA_inactivation['peak_conductance'] = IA_inactivation['peak_amplitudes'] / step_amplitude
statistics = [
    'peak_latencies',
    'steady_state_amplitudes',
    'steady_state_conductance',
    'peak_conductance',
    'peak_amplitudes',
    'inactivation_taus',
    'peak_exists'
]
stats_df = pd.DataFrame({key: IA_inactivation[key] for key in statistics})
stats_df
stats_df.mean()
stats_df.sem()
stats_df.to_csv(os.path.join('data', 'processed', 'GABA', 'transient_current_parameters.csv'), index=False)
# # Figures
# +
fit_example_no = 8
bg_tr_alpha = 0.8
tr_spec = gs.GridSpec(2, 1, height_ratios=[1, 0.2], hspace=0, top=0.97, bottom=0.05, right=0.97, left=0.1)
plt.figure()
wc_ax = plt.subplot(tr_spec[0, :])
plt.plot(
    (IA_inactivation['traces'][~IA_inactivation['peak_exists'], 2, :].T - 2606.2),
    IA_inactivation['traces'][~IA_inactivation['peak_exists'], 0, :].T,
    '-', color='gray', lw=0.5, alpha=bg_tr_alpha
)
plt.plot(
    (IA_inactivation['traces'][IA_inactivation['peak_exists'], 2, :].T - 2606.2),
    IA_inactivation['traces'][IA_inactivation['peak_exists'], 0, :].T,
    '-', color=colors['som'], lw=0.5, alpha=bg_tr_alpha
)
plt.xlim(-50, 1550)
plt.ylim(-100, 1600)
pltools.add_scalebar(y_units='pA', y_size=500, omit_x=True, anchor=(-0.05, 0.1), linewidth=sbarlw)
sns.despine(ax=plt.gca(), trim=True)
wc_ins = inset_axes(wc_ax, '60%', '50%', loc='upper right', borderpad=1)
plt.plot(
    (IA_inactivation['traces'][~IA_inactivation['peak_exists'], 2, :].T - 2606.2),
    IA_inactivation['traces'][~IA_inactivation['peak_exists'], 0, :].T,
    '-', color='gray', lw=0.5, alpha=bg_tr_alpha
)
plt.plot(
    (IA_inactivation['traces'][IA_inactivation['peak_exists'], 2, :].T - 2606.2),
    IA_inactivation['traces'][IA_inactivation['peak_exists'], 0, :].T,
    '-', color=colors['som'], lw=0.5, alpha=bg_tr_alpha
)
plt.xlim(-20, 300)
plt.ylim(-50, 1300)
pltools.add_scalebar(x_units='ms', x_size=100, omit_y=True, anchor=(0.95, 0.7), x_label_space=0.05, remove_frame=False, linewidth=sbarlw)
plt.xticks([])
plt.yticks([])
dashedBorder(wc_ins, lw=insetlw)
mark_inset(wc_ax, wc_ins, 2, 4, ls='--', color='gray', lw=insetlw)
plt.subplot(tr_spec[1, :])
plt.plot(
    (IA_inactivation['traces'][:, 2, :].T - 2606.2),
    IA_inactivation['traces'][:, 1, :].T,
    '-', color=colors['input'], lw=0.5, alpha=bg_tr_alpha
)
plt.xlim(-50, 1550)
pltools.add_scalebar(x_units='ms', x_size=200, omit_y=True, anchor=(0.7, -0.05), x_label_space=0.05, linewidth=sbarlw)
plt.tight_layout()
if IMG_PATH is not None:
    plt.savefig(os.path.join(IMG_PATH, 'GABA_kinetics_trace_only.png'))
    plt.savefig(os.path.join(IMG_PATH, 'GABA_kinetics_trace_only.svg'))
plt.show()
# -
| figs/scripts/gaba_currents.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import os, sys, zipfile
import urllib.request
import shutil
import numpy as np
import skimage.io as io
import pylab
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
# Record package versions for reproducibility
print("os: %s" % os.name)
print("sys: %s" % sys.version)
print("numpy: %s, %s" % (np.__version__, np.__file__))
annType = ['segm','bbox','keypoints']
annType = annType[1] #specify type here
prefix = 'person_keypoints' if annType=='keypoints' else 'instances'
print('Running demo for *%s* results.'%(annType))
# Setup data paths
dataDir = '../..'
dataType = 'val2014'
annDir = '{}/annotations'.format(dataDir)
annZipFile = '{}/annotations_train{}.zip'.format(dataDir, dataType)
annFile = '{}/instances_{}.json'.format(annDir, dataType)
annURL = 'http://images.cocodataset.org/annotations/annotations_train{}.zip'.format(dataType)
print (annDir)
print (annFile)
print (annZipFile)
print (annURL)
# Download data if not available locally
if not os.path.exists(annDir):
    os.makedirs(annDir)
if not os.path.exists(annFile):
    if not os.path.exists(annZipFile):
        print("Downloading zipped annotations to " + annZipFile + " ...")
        with urllib.request.urlopen(annURL) as resp, open(annZipFile, 'wb') as out:
            shutil.copyfileobj(resp, out)
        print("... done downloading.")
    print("Unzipping " + annZipFile)
    with zipfile.ZipFile(annZipFile, "r") as zip_ref:
        zip_ref.extractall(dataDir)
    print("... done unzipping")
print ("Will use annotations in " + annFile)
#initialize COCO ground truth api
cocoGt=COCO(annFile)
#initialize COCO detections api
resFile='%s/results/%s_%s_fake%s100_results.json'
resFile = resFile%(dataDir, prefix, dataType, annType)
cocoDt=cocoGt.loadRes(resFile)
imgIds=sorted(cocoGt.getImgIds())
imgIds=imgIds[0:100]
imgId = imgIds[np.random.randint(100)]
# running evaluation
cocoEval = COCOeval(cocoGt,cocoDt,annType)
cocoEval.params.imgIds = imgIds
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
| cocoapi/PythonAPI/demos/pycocoEvalDemo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://pymt.readthedocs.io"><img style="float: right" src="images/pymt-logo-header-text.png"></a>
#
# # Loading PyMT models
#
# In this tutorial we will learn how to use *PyMT* to:
# * Find and load models into a PyMT environment
# * Look at a model's metadata
# * Set up a model simulation
#
# We will also learn about the common interface for *PyMT* models.
#
# For more information you can look at the [pymt documentation](https://pymt.readthedocs.io)
# ## The *PyMT* model library
#
# All of the models that are available through *pymt* are held in a variable called, `MODELS`.
#
# To have a look at what models are currently available, we'll import the model collection
# and print the names of all of the models.
from pymt import MODELS
# You can think of `MODELS` as a Python dictionary where the keys are the names of the models and the values the model classes. For example, to see the names of all of the available models, you can use the `keys` method, just as you would with a `dict`.
MODELS.keys()
# The class associated with each model can be accessed either as an attribute of `MODELS` or, in the usual way, using square brackets. That is, the following two methods are equivalent.
MODELS.Child is MODELS["Child"]
# As with other Python objects, you can access a model's docstring with the `help` command.
help(MODELS.Child)
# + [markdown] solution2="hidden" solution2_first=true
# ## Exercise: Load a model and have a look at its documention
#
# Load a couple other models and have a look at their documentation. You should notice some similarities in how the models are presented.
# +
# Your code here
# + solution2="hidden"
help(MODELS.Cem)
# + solution2="hidden"
help(MODELS.FrostNumber)
# -
# ## Model documentation
#
# As you will hopefully have seen, each model comes with:
# * A brief description of what it does
# * A list of the model authors
# * Other model metadata, such as:
# * version number
# * Open Source License
# * DOI
# * Link to additional model documentation
# * A list of *bibtex*-formatted references to cite
# * The usual docstring sections with method descriptions, parameters, etc.
# Much of the above metadata is also available programmatically through attributes of a model instance. First we create an instance of the model.
#
# (for those who might not be familiar with Python naming conventions: *CamelCase* indicates a *class*, lowercase an *instance*, and UPPERCASE a *constant*)
child = MODELS.Child()
# Now try to access metadata as attributes. Hopefully, you'll be able to guess the attribute names.
print(child.version)
for ref in child.cite_as:
print(ref)
# # Set up a model simulation
#
# Model simulations can either be set up by hand or by using the `setup` method of a *PyMT* model. If using `setup`, *PyMT* will take one or more model template input files and fill in the values provided as keyword arguments to `setup`.
#
# You can see the valid keywords for a model's `setup` method under the *Parameters* section of its docstring. Each parameter has a description, data type, default value, and units.
help(child)
# You can obtain a description of the default parameters programmatically through the *defaults* attribute.
for param in child.defaults:
print(param)
# By default, `setup` will create a set of input files in a temporary folder, but you can change this behavior by providing the name of another folder to put them into. To demonstrate what's going on, let's create a new folder in the current directory and see what happens.
child.setup("child-sim-0")
# `setup` returns a tuple of the main configuration file and the simulation folder. Within that folder will be one or more input files specific to the model you're using.
# ls child-sim-0
# cat child-sim-0/child.in
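# Conceptually, `setup` fills a template input file with the keyword values and writes the result into the simulation folder. Below is a rough standard-library sketch of that idea — not pymt's actual implementation; the parameter names and file name are made up for illustration:

```python
from pathlib import Path
from string import Template
import tempfile

# A made-up template input file; real models ship their own templates.
TEMPLATE = Template("grid_spacing: ${grid_spacing}\nrun_duration: ${run_duration}\n")

def sketch_setup(sim_dir, **params):
    """Mimic pymt's setup(): fill the template and write it into sim_dir."""
    folder = Path(sim_dir)
    folder.mkdir(parents=True, exist_ok=True)
    config_file = folder / "model.in"
    config_file.write_text(TEMPLATE.substitute(**params))
    # Like pymt's setup(), return (main configuration file, simulation folder)
    return str(config_file), str(folder)

config, folder = sketch_setup(Path(tempfile.mkdtemp()) / "sim-0",
                              grid_spacing=500.0, run_duration=1e6)
print(Path(config).read_text())
```

As with the real `setup`, the returned tuple points at the main configuration file and the folder holding it.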
# + [markdown] solution2="hidden" solution2_first=true
# # Exercise: Setup a simulation using non-default values
#
# Use a model's `setup` method to create a set of input files using non-default values and verify the input files were created correctly.
# +
# Your code here
# + solution2="hidden"
from pymt import MODELS
hydrotrend = MODELS.Hydrotrend()
dict(hydrotrend.parameters)
# + solution2="hidden"
hydrotrend.setup("hydrotrend-sim-0", hydraulic_conductivity=200.0)
# + solution2="hidden"
dict(hydrotrend.parameters)["hydraulic_conductivity"]
# + solution2="hidden"
# ls hydrotrend-sim-0
# + solution2="hidden"
# cat hydrotrend-sim-0/HYDRO.IN
# + [markdown] solution2="hidden" solution2_first=true
# # Exercise: As a bonus, set up a series of simulations
#
# Pull parameters from a uniform distribution, which could be part of a sensitivity study.
# +
# Your code here
# + solution2="hidden"
import numpy as np
hydrotrend = MODELS.Hydrotrend()
values = np.linspace(50., 200.)
for sim_no, value in enumerate(values):
hydrotrend.setup(f"hydrotrend-sim-{sim_no}", hydraulic_conductivity=value)
# + solution2="hidden"
# cat hydrotrend-sim-12/HYDRO.IN | grep "hydraulic conductivity"
# + solution2="hidden"
print(values[12])
# + solution2="hidden"
# cat hydrotrend-sim-24/HYDRO.IN | grep "hydraulic conductivity"
# + solution2="hidden"
print(values[24])
| lessons/pymt/ans/01a_model_setup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
from advopt import tasks
from advopt.classifier import *
from advopt.target.utils import combine
from advopt.target import logloss
import torch
# +
task = tasks.XOR()
assert len(task.search_space()) == 1, 'This example only works for 1D search space.'
params = np.linspace(task.search_space()[0][0], task.search_space()[0][1], num=21)
clf_type = 'NN'
device='cuda:0'
size = 32 * 1024
# +
clfs = dict()
n_control = 4
n_units = 128
grad_penalty = 1e-2
clfs['JSD'] = Network(
Dense(2, hidden_units=128, dropout=False),
device=device, min_stagnation=1024,
regularization=None, capacity=lincapacity(3),
const_regularization=r1_reg(1),
stop_diff=2e-2
)
for i in range(n_control):
clfs['JSD_%d' % (i + 1, )] = Network(
Dense(2, hidden_units=128, dropout=True),
device=device, min_stagnation=1024,
regularization=None, capacity=constcapacity((i + 1) / (n_control + 1), device=device),
const_regularization=r1_reg(1),
stop_diff=2e-2
)
clfs['JSD-dropout'] = Network(
Dense(2, hidden_units=128, dropout=True),
device=device, min_stagnation=1024,
regularization=None, capacity=lincapacity(3),
const_regularization=r1_reg(1),
stop_diff=2e-2
)
clfs['JSD-l2'] = Network(
Dense(2, hidden_units=128, dropout=False),
device=device, min_stagnation=1024,
regularization=l2_reg(1e-2), capacity=logcapacity(10),
const_regularization=r1_reg(1),
stop_diff=2e-2
)
# +
### to obtain smooth curves we reuse the same data
### the same effect can be achieved by averaging curves over multiple runs.
data_pos = task.ground_truth_generator()(size)
data_pos_test = task.ground_truth_generator()(size)
if task.is_synthetic():
data_neg_0 = task.ground_truth_generator()(size)
data_neg_test_0 = task.ground_truth_generator()(size)
else:
data_neg_0 = None
data_neg_test_0 = None
# -
try:
d_pos = task.ground_truth_generator()(size)
d_neg, _ = task.generator(task.example_parameters())(size)
plt.figure(figsize=(4, 4))
plt.scatter(d_neg[:512, 0], d_neg[:512, 1], label='generator', s=10)
plt.scatter(d_pos[:512, 0], d_pos[:512, 1], label='ground-truth', s=10)
plt.legend(loc='upper left', fontsize=14)
plt.savefig('%s-example.pdf' % (task.name(), ))
except Exception:
import traceback as tb
tb.print_exc()
# +
divergences = {
name : np.zeros(shape=(params.shape[0], ))
for name in clfs
}
for i, param in enumerate(tqdm(params)):
if task.is_synthetic():
data_neg = task.transform(data_neg_0, [param])
data_neg_test = task.transform(data_neg_test_0, [param])
else:
data_neg = task.generator([param])(size)
data_neg_test = task.generator([param])(size)
data, labels = combine(data_pos, data_neg)
data_test, labels_test = combine(data_pos_test, data_neg_test)
for name in clfs:
clf = clfs[name]
X_pos = torch.tensor(data[labels > 0.5, :], device=device, requires_grad=False, dtype=torch.float32)
X_neg = torch.tensor(data[labels < 0.5, :], device=device, requires_grad=False, dtype=torch.float32)
clf.fixed_fit(X_pos, X_neg, n_iterations=4 * 1024)
proba = clf.predict_proba(data_test)
divergences[name][i] = np.log(2) - np.mean(
logloss(labels_test, proba[:, 1])
)
# -
import pickle
with open('pJSD-%s-%s.pickled' % (task.name(), clf_type), 'wb') as f:
pickle.dump(divergences, f)
import pickle
with open('pJSD-%s-%s.pickled' % (task.name(), clf_type), 'rb') as f:
divergences = pickle.load(f)
# +
plt.figure(figsize=(8, 8))
for i in range(4):
plt.plot(params, divergences['JSD_%d' % (i + 1, )], '--', color='black')
plt.plot(params, divergences['JSD'], '-', color='black', label='JSD')
plt.plot(params, divergences['JSD-l2'], lw=2, label='linear pJSD', color=plt.cm.tab10(0))
plt.plot(params, divergences['JSD-dropout'], lw=2, label='logarithmic pJSD', color=plt.cm.tab10(1))
plt.legend(loc='upper left', fontsize=14, framealpha=0.95)
plt.xlabel(task.parameters_names()[0], fontsize=14)
plt.ylabel('divergence', fontsize=14)
plt.savefig('%s-%s.pdf' % (task.name(), clf_type))
# -
| experiments/AD-XOR-NN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from copy import copy
from functools import partial
import itertools
import json
from pathlib import Path
import re
import sys
sys.path.append("../src")
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as st
import statsmodels.formula.api as smf
from tqdm import tqdm, tqdm_notebook
# %matplotlib inline
sns.set(style="whitegrid", context="paper", font_scale=3.5, rc={"lines.linewidth": 2.5})
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
#set_matplotlib_formats('svg')
# %load_ext autoreload
# %autoreload 2
import util
# -
# ## Data preparation
output_path = Path("../output")
decoder_path = output_path / "decoders"
bert_encoding_path = output_path / "encodings"
model_path = output_path / "bert"
checkpoints = [util.get_encoding_ckpt_id(dir_entry) for dir_entry in bert_encoding_path.iterdir()]
# +
models = [model for model, _, _ in checkpoints]
baseline_model = "baseline"
if baseline_model not in models:
raise ValueError("Missing baseline model. This is necessary to compute performance deltas in the analysis of fine-tuning models. Stop.")
standard_models = [model for model in models if not model.startswith("LM_") and not model == baseline_model]
custom_models = [model for model in models if model.startswith("LM_") and not model == baseline_model]
runs = sorted(set(run for _, run, _ in checkpoints))
checkpoint_steps = sorted(set(step for _, _, step in checkpoints))
# Models which should appear in the final report figures
report_models = ["SQuAD", "QQP", "MNLI", "SST", "LM", "LM_scrambled", "LM_scrambled_para", "LM_pos", "glove"]
# Model subsets to render in different report figures
report_model_sets = [
("all", set(report_models)),
("standard", set(report_models) & set(standard_models)),
("custom", set(report_models) & set(custom_models)),
]
report_model_sets = [(name, model_set) for name, model_set in report_model_sets
if len(model_set) > 0]
# +
RENDER_FINAL = True
figure_path = Path("../reports/figures")
figure_path.mkdir(exist_ok=True, parents=True)
report_hues = dict(zip(sorted(report_models), sns.color_palette()))
# -
# ### Decoder performance metrics
# Load decoder performance data.
decoding_perfs = util.load_decoding_perfs(decoder_path)
# Save perf data.
decoding_perfs.to_csv(output_path / "decoder_perfs.csv")
# +
# # Load comparison model data.
# for other_model in other_models:
# other_perf_paths = list(Path("../models/decoders").glob("encodings.%s-*.csv" % other_model))
# for other_perf_path in tqdm_notebook(other_perf_paths, desc=other_model):
# subject, = re.findall(r"-([\w\d]+)\.csv$", other_perf_path.name)
# perf = pd.read_csv(other_perf_path,
# usecols=["mse", "r2", "rank_median", "rank_mean", "rank_min", "rank_max"])
# decoding_perfs.loc[other_model, 1, 250, subject] = perf.iloc[0]
# -
# ### Model performance metrics
# +
# For each model, load checkpoint data: global step, gradient norm information
model_metadata = {}
for model, run, step in tqdm_notebook(checkpoints):
run_dir = model_path / ("%s-%i" % (model, run))
# Fetch corresponding fine-tuning metadata.
ckpt_path = run_dir / ("model.ckpt-step%i" % step)
try:
metadata = util.load_bert_finetune_metadata(run_dir, step)
except Exception as e:
pass
else:
if metadata["steps"]:
model_metadata[model, run] = pd.DataFrame.from_dict(metadata["steps"], orient="index")
# SQuAD eval results need to be loaded separately, since they run offline.
if model == "SQuAD":
pred_dir = output_path / "eval_squad" / ("SQuAD-%i-%i" % (run, step))
try:
with (pred_dir / "results.json").open("r") as results_f:
results = json.load(results_f)
model_metadata[model, run].loc[step]["eval_accuracy"] = results["best_f1"] / 100.
        except Exception:
print("Failed to retrieve eval data for SQuAD-%i-%i" % (run, step))
model_metadata = pd.concat(model_metadata, names=["model", "run", "step"], sort=True)
# -
# ### Putting it all together
# Join decoding data, post-hoc rank evaluation data, and model training metadata into a single df.
old_index = decoding_perfs.index
df = decoding_perfs.reset_index().join(model_metadata, on=["model", "run", "step"]).set_index(old_index.names)
df.head()
# -----------
all_subjects = df.index.get_level_values("subject").unique()
all_subjects
# +
try:
subjects_with_baseline = set(decoding_perfs.loc[baseline_model, :, :].index.get_level_values("subject"))
except Exception:
subjects_with_baseline = set()
if subjects_with_baseline != set(all_subjects):
raise ValueError("Cannot proceed. Missing base decoder evaluation for subjects: " + str(set(all_subjects) - subjects_with_baseline))
# -
# ### Synthetic columns
df["eval_accuracy_delta"] = df.groupby(["model", "run"]).eval_accuracy.transform(lambda xs: xs - xs.iloc[0])
df["eval_accuracy_norm"] = df.groupby(["model", "run"]).eval_accuracy.transform(lambda accs: (accs - accs.min()) / (accs.max() - accs.min()))
# +
def decoding_perf_delta(xs, metric="mse"):
subject = xs.index[0][3]
base_metric = df.loc[baseline_model, 1, 0, subject][metric]
return xs - base_metric.item()
df["decoding_mse_delta"] = df.groupby(["model", "run", "subject"]).mse.transform(partial(decoding_perf_delta, metric="mse"))
df["rank_mean_delta"] = df.groupby(["model", "run", "subject"]).rank_mean.transform(partial(decoding_perf_delta, metric="rank_mean"))
df["rank_median_delta"] = df.groupby(["model", "run", "subject"]).rank_median.transform(partial(decoding_perf_delta, metric="rank_median"))
# -
NUM_BINS = 50
def bin(xs):
if xs.isnull().values.any(): return np.nan
return pd.cut(xs, np.linspace(xs.min(), xs.max() + 1e-5, NUM_BINS), labels=False)
df["eval_accuracy_bin"] = df.groupby(["model"]).eval_accuracy.transform(bin)
df["decoding_mse_bin"] = df.groupby(["subject"]).decoding_mse_delta.transform(bin)
df["total_global_norms_bin"] = df.groupby(["model"]).total_global_norms.transform(bin)
ROLLING_WINDOW_SIZE = 5
grouped = df.groupby(["model", "run", "subject"])
for col in ["mse", "decoding_mse_delta", "eval_accuracy", "train_loss", "rank_mean", "rank_mean_delta"]:
df["%s_rolling" % col] = grouped[col].transform(lambda rows: rows.rolling(ROLLING_WINDOW_SIZE, min_periods=1).mean())
df.tail()
df.head()
dfi = df.reset_index()
# ## Model training analysis
#
# Let's verify that each model is not overfitting; if it is overfitting, restrict our analysis to just the region before overfitting begins.
# +
# g = sns.FacetGrid(df.reset_index().melt(id_vars=["model", "run", "step"],
# value_vars=["train_loss_rolling", "eval_accuracy_rolling"]),
# row="variable", col="model", sharex=True, sharey=False, height=4)
# g.map(sns.lineplot, "step", "value", "run", ci=None)
# g.add_legend()
# +
# %matplotlib agg
if RENDER_FINAL:
# models which appear on left edge of subfigs in paper
LEFT_EDGE_MODELS = ["QQP", "LM"]
training_fig_path = figure_path / "training"
training_fig_path.mkdir(exist_ok=True)
shared_kwargs = {"legend": False, "ci": None}
for model in tqdm_notebook(report_models):
f, (loss_fig, acc_fig) = plt.subplots(2, 1, figsize=(10,15), sharex=True)
try:
local_data = df.loc[model].reset_index()
except KeyError:
print(f"Missing training data for {model}")
continue
ax = sns.lineplot(data=local_data, x="step", y="train_loss_rolling", hue="run", ax=loss_fig, **shared_kwargs)
ax.set_ylabel("Training loss\n(rolling window)" if model in LEFT_EDGE_MODELS else "")
ax.set_xlabel("Training step")
ax = sns.lineplot(data=local_data, x="step", y="eval_accuracy_rolling", hue="run", ax=acc_fig, **shared_kwargs)
ax.set_ylabel("Validation set accuracy\n(rolling window)" if model in LEFT_EDGE_MODELS else "")
ax.set_xlabel("Training step")
sns.despine()
plt.tight_layout()
plt.savefig(training_fig_path / ("%s.pdf" % model))
plt.close()
# %matplotlib inline
# -
# ## Decoding analyses
MSE_DELTA_LABEL = r"$\Delta$(MSE)"
MAR_DELTA_LABEL = r"$\Delta$(MAR)"
# ### Final state analysis
# +
# %matplotlib agg
if RENDER_FINAL:
final_state_fig_path = figure_path / "final_state"
final_state_fig_path.mkdir(exist_ok=True)
metrics = [("decoding_mse_delta", MSE_DELTA_LABEL, None, None),
("rank_mean_delta", MAR_DELTA_LABEL, None, None),
("mse", "Mean squared error", 0.00335, 0.00385),
("rank_mean", "Mean average rank", 20, 95)]
for model_set_name, model_set in report_model_sets:
final_df = dfi[(dfi.step == checkpoint_steps[-1]) & (dfi.model.isin(model_set))]
if final_df.empty:
continue
for metric, label, ymin, ymax in tqdm_notebook(metrics, desc=model_set_name):
fig, ax = plt.subplots(figsize=(15, 10))
# Plot BERT baseline performance.
if "delta" not in metric:
# TODO error region instead -- plt.fill_between
ax.axhline(dfi[dfi.model == baseline_model][metric].mean(),
linestyle="--", color="gray")
sns.barplot(data=final_df, x="model", y=metric,
order=final_df.groupby("model")[metric].mean().sort_values().index,
palette=report_hues, ax=ax)
padding = final_df[metric].var() * 0.005
plt.ylim((ymin or (final_df[metric].min() - padding), ymax or (final_df[metric].max() + padding)))
plt.xlabel("Model")
plt.ylabel(label)
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.savefig(final_state_fig_path / (f"{metric}.{model_set_name}.pdf"))
#plt.close(fig)
# %matplotlib inline
# +
# %matplotlib agg
if RENDER_FINAL:
final_state_fig_path = figure_path / "final_state_within_subject"
final_state_fig_path.mkdir(exist_ok=True)
metrics = [("decoding_mse_delta", MSE_DELTA_LABEL),
("rank_mean_delta", MAR_DELTA_LABEL),
("mse", "Mean squared error"),
("rank_mean", "Mean average rank")]
for model_set_name, model_set in report_model_sets:
final_df = dfi[(dfi.step == checkpoint_steps[-1]) & (dfi.model.isin(model_set))]
for metric, label in tqdm_notebook(metrics, desc=model_set_name):
fig = plt.figure(figsize=(25, 10))
sns.barplot(data=final_df, x="model", y=metric, hue="subject",
order=final_df.groupby("model")[metric].mean().sort_values().index)
plt.ylabel(label)
plt.xticks(rotation=30, ha="right")
plt.legend(loc="center left", bbox_to_anchor=(1,0.5))
plt.tight_layout()
plt.savefig(final_state_fig_path / f"{metric}.{model_set_name}.pdf")
plt.close(fig)
# %matplotlib inline
# +
# %matplotlib agg
if RENDER_FINAL:
final_state_fig_path = figure_path / "final_state_within_model"
final_state_fig_path.mkdir(exist_ok=True)
metrics = [("decoding_mse_delta", MSE_DELTA_LABEL, None, None),
("rank_mean_delta", MAR_DELTA_LABEL, None, None),
("mse", "Mean squared error", None, None),
("rank_mean", "Mean average rank", None, None)]
subj_order = dfi[(dfi.step == checkpoint_steps[-1]) & (dfi.model.isin(report_model_sets[0][1]))] \
.groupby("subject")[metrics[0][0]].mean().sort_values().index
for model_set_name, model_set in report_model_sets:
final_df = dfi[(dfi.step == checkpoint_steps[-1]) & (dfi.model.isin(model_set))]
for metric, label, ymin, ymax in tqdm_notebook(metrics, desc=model_set_name):
fig = plt.figure(figsize=(25, 10))
sns.barplot(data=final_df, x="subject", y=metric, hue="model",
order=subj_order)
padding = final_df[metric].var() * 0.005
plt.ylim((ymin or (final_df[metric].min() - padding), ymax or (final_df[metric].max() + padding)))
plt.xlabel("Subject")
plt.ylabel(label)
plt.legend(loc="center left", bbox_to_anchor=(1,0.5))
plt.tight_layout()
plt.savefig(final_state_fig_path / f"{metric}.{model_set_name}.pdf")
plt.close(fig)
# %matplotlib inline
# -
# ### Step analysis
# + slideshow={"slide_type": "-"}
# g = sns.FacetGrid(dfi, col="run", size=6)
# g.map(sns.lineplot, "step", "decoding_mse_delta", "model").add_legend()
# plt.xlabel("Fine-tuning step")
# plt.ylabel(MSE_DELTA_LABEL)
# +
# g = sns.FacetGrid(dfi, col="run", size=6)
# g.map(sns.lineplot, "step", "rank_mean_delta", "model").add_legend()
# plt.xlabel("Fine-tuning step")
# plt.ylabel(MAR_DELTA_LABEL)
# +
f, ax = plt.subplots(figsize=(15, 10))
sns.lineplot(data=dfi, x="step", y="decoding_mse_delta_rolling", hue="model", ax=ax)
plt.xlabel("Fine-tuning step")
plt.ylabel(MSE_DELTA_LABEL)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# +
f, ax = plt.subplots(figsize=(15, 10))
sns.lineplot(data=dfi, x="step", y="rank_mean_delta_rolling", hue="model", ax=ax)
plt.xlabel("Fine-tuning step")
plt.ylabel(MAR_DELTA_LABEL)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
# +
# %matplotlib agg
if RENDER_FINAL:
trajectory_fig_dir = figure_path / "trajectories"
trajectory_fig_dir.mkdir(exist_ok=True)
metrics = [("decoding_mse_delta", MSE_DELTA_LABEL),
("rank_mean_delta", MAR_DELTA_LABEL),
("decoding_mse_delta_rolling", MSE_DELTA_LABEL),
("rank_mean_delta_rolling", MAR_DELTA_LABEL)]
for model_set_name, model_set in report_model_sets:
for metric, label in tqdm_notebook(metrics, desc=model_set_name):
fig = plt.figure(figsize=(18, 10))
sns.lineplot(data=dfi[dfi.model.isin(model_set)],
x="step", y=metric, hue="model", palette=report_hues)
plt.xlim((0, checkpoint_steps[-1]))
plt.xlabel("Fine-tuning step")
plt.ylabel(label)
plt.legend(loc="center left", bbox_to_anchor=(1, 0.5))
plt.tight_layout()
plt.savefig(trajectory_fig_dir / f"{metric}.{model_set_name}.pdf")
plt.close(fig)
# %matplotlib inline
# +
# g = sns.FacetGrid(dfi[dfi.model != baseline_model], col="model", row="run", size=6)
# g.map(sns.lineplot, "step", "decoding_mse_delta", "subject", ci=None).add_legend()
# +
# g = sns.FacetGrid(dfi, col="model", row="run", size=6)
# g.map(sns.lineplot, "step", "rank_median_delta", "subject", ci=None).add_legend()
# -
# ### Gradient norm analysis
# +
# f, ax = plt.subplots(figsize=(10, 8))
# sns.lineplot(data=dfi, y="decoding_mse_delta", x="total_global_norms_bin", hue="model", ax=ax)
# ax.set_title("Decoding performance delta vs. binned total global gradient norm")
# ax.set_xlabel("Cumulative global gradient norm bin")
# ax.set_ylabel(MSE_DELTA_LABEL)
# +
#g = sns.FacetGrid(dfi, col="model", row="run", size=6, sharex=False, sharey=True)
#g.map(sns.lineplot, "total_global_norms", "decoding_mse_delta", "subject", ci=None).add_legend()
# -
# ### Eval accuracy analysis
# +
#g = sns.FacetGrid(dfi, col="model", row="run", sharex=False, sharey=True, size=7)
#g.map(sns.lineplot, "eval_accuracy", "decoding_mse_delta", "subject", ci=None).add_legend()
# -
# ## Per-subject analysis
f, ax = plt.subplots(figsize=(14, 9))
dff = pd.DataFrame(dfi[dfi.step == checkpoint_steps[-1]].groupby(["model", "run"]).apply(lambda xs: xs.groupby("subject").decoding_mse_delta.mean()).stack()).reset_index()
sns.barplot(data=dff, x="model", hue="subject", y=0, ax=ax)
plt.title("subject final decoding mse delta, averaging across runs")
f, ax = plt.subplots(figsize=(14, 9))
dff = pd.DataFrame(dfi[dfi.step == checkpoint_steps[-1]].groupby(["model", "run"]).apply(lambda xs: xs.groupby("subject").rank_mean_delta.mean()).stack()).reset_index()
sns.barplot(data=dff, x="model", hue="subject", y=0, ax=ax)
plt.title("subject final rank mean delta, averaging across runs")
f, ax = plt.subplots(figsize=(14, 9))
dff = pd.DataFrame(dfi.groupby(["model", "run"]).apply(lambda xs: xs.groupby("subject").decoding_mse_delta.max()).stack()).reset_index()
sns.violinplot(data=dff, x="subject", y=0)
sns.stripplot(data=dff, x="subject", y=0, edgecolor="white", linewidth=1, alpha=0.7, ax=ax)
plt.title("subject max decoding mse delta, averaging across models and runs")
f, ax = plt.subplots(figsize=(14, 9))
dff = pd.DataFrame(dfi.groupby(["model", "run"]).apply(lambda xs: xs.groupby("subject").decoding_mse_delta.min()).stack()).reset_index()
sns.violinplot(data=dff, x="subject", y=0)
sns.stripplot(data=dff, x="subject", y=0, edgecolor="white", linewidth=1, alpha=0.7, ax=ax)
plt.title("subject min decoding mse delta, averaging across models and runs")
# ## Statistical analyses
#
# First, some data prep for comparing final vs. start states:
perf_comp = df.query("step == %i" % checkpoint_steps[-1]).reset_index(level="step", drop=True).sort_index()
# Join data from baseline
perf_comp = perf_comp.join(df.loc[baseline_model, 1, 0].rename(columns=lambda c: "start_%s" % c))
if "glove" in perf_comp.index.levels[0]:
perf_comp = perf_comp.join(df.loc["glove", 1, 250].rename(columns=lambda c: "glove_%s" % c))
perf_comp.head()
(perf_comp.mse - perf_comp.start_mse).plot.hist()
perf_compi = perf_comp.reset_index()
# Quantitative tests:
#
# 1. for any GLUE task g, MSE(g after 250) > MSE(LM)
# 2. for any LM_scrambled_para task t, MSE(t after 250) < MSE(LM)
# 3. for any GLUE task g, MAR(g after 250) > MAR(LM)
# 4. for any LM_scrambled_para task t, MAR(t after 250) < MAR(LM)
# 5. MSE(LM after 250) =~ MSE(LM)
# 6. MAR(LM after 250) =~ MAR(LM)
# 7. for any LM_scrambled_para task t, MSE(t after 250) < MSE(glove)
# 8. for any LM_scrambled_para task t, MAR(t after 250) < MAR(glove)
# 9. for any LM_pos task t, MSE(t after 250) > MSE(LM)
# 10. for any LM_pos task t, MAR(t after 250) > MAR(LM)
# ### test 1
sample = perf_compi[~perf_compi.model.str.startswith((baseline_model, "LM", "glove"))]
sample.mse.hist()
sample.start_mse.hist()
st.ttest_rel(sample.mse, sample.start_mse)
# ### test 1 (split across models)
# +
results = []
for model in standard_models:
if model in ["LM", "glove"]: continue
sample = perf_compi[perf_compi.model == model]
results.append((model,) + st.ttest_rel(sample.mse, sample.start_mse))
pd.DataFrame(results, columns=["model", "tval", "pval"])
# -
# ### test 2
sample = perf_compi[perf_compi.model == "LM_scrambled_para"]
sample.mse.hist()
sample.start_mse.hist()
st.ttest_rel(sample.mse, sample.start_mse)
# ### test 3
sample = perf_compi[~perf_compi.model.str.startswith((baseline_model, "LM", "glove"))]
sample.rank_mean.hist()
sample.start_rank_mean.hist()
st.ttest_rel(sample.rank_mean, sample.start_rank_mean)
# ### test 3 (split across models)
# +
results = []
for model in standard_models:
if model in ["LM", "glove"]: continue
sample = perf_compi[perf_compi.model == model]
results.append((model,) + st.ttest_rel(sample.rank_mean, sample.start_rank_mean))
pd.DataFrame(results, columns=["model", "tval", "pval"])
# -
# ### test 4
sample = perf_compi[perf_compi.model == "LM_scrambled_para"]
sample.rank_mean.hist()
sample.start_rank_mean.hist()
st.ttest_rel(sample.rank_mean, sample.start_rank_mean)
# ### test 5
sample = perf_compi[perf_compi.model == "LM"]
sample.mse.hist()
sample.start_mse.hist()
st.ttest_rel(sample.mse, sample.start_mse)
# ### test 6
sample = perf_compi[perf_compi.model == "LM"]
sample.rank_mean.hist()
sample.start_rank_mean.hist()
st.ttest_rel(sample.rank_mean, sample.start_rank_mean)
# ### test 7
sample = perf_compi[perf_compi.model == "LM_scrambled_para"]
sample.mse.hist()
sample.glove_mse.hist()
st.ttest_rel(sample.mse, sample.glove_mse)
# ### test 8
sample = perf_compi[perf_compi.model == "LM_scrambled_para"]
sample.rank_mean.hist()
sample.glove_rank_mean.hist()
st.ttest_rel(sample.rank_mean, sample.glove_rank_mean)
# ### test 9
sample = perf_compi[perf_compi.model == "LM_pos"]
sample.mse.hist()
sample.start_mse.hist()
st.ttest_rel(sample.mse, sample.start_mse)
f = plt.figure(figsize=(20,20))
sns.barplot(data=pd.melt(sample, id_vars=["subject"], value_vars=["mse", "start_mse"]),
x="subject", y="value", hue="variable")
plt.ylim((0.0033, 0.0038))
# ### test 10
sample = perf_compi[perf_compi.model == "LM_pos"]
sample.rank_mean.hist()
sample.start_rank_mean.hist()
st.ttest_rel(sample.rank_mean, sample.start_rank_mean)
| notebooks/quantitative_dynamic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Comparison SQL - Pandas - Spark
import sys
import os
# SPARK_HOME='/usr/local/Cellar/apache-spark/2.4.4'
# os.environ['SPARK_HOME'] = SPARK_HOME
# if SPARK_HOME not in sys.path:
# sys.path.insert(0, SPARK_HOME)
import findspark
import pyspark
import numpy as np
import pandas as pd
import pyspark.sql.functions as F
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.master('local[2]') \
.appName("SimpleApp") \
.getOrCreate()
# # Load Data
# ## Load Spark Data
# +
# for f in map(str.strip, os.popen('ls ../data/*.csv | grep -v 100_').readlines()):
# nam = os.path.basename(f).split('.')[0]
# print(f"""
#     print("{nam}_df [{f}]")
# {nam}_df = spark.read.csv('{f}', inferSchema=True, header=True)
# {nam}_df.registerTempTable('{nam}')
# {nam}_df.printSchema()
# """)
# +
print("customers_df [../data/customers.csv]")
customers_df = spark.read.csv('../data/customers.csv', inferSchema=True, header=True)
customers_df.registerTempTable('customers')
customers_df.printSchema()
print("employees_df [../data/employees.csv]")
employees_df = spark.read.csv('../data/employees.csv', inferSchema=True, header=True)
employees_df.registerTempTable('employees')
employees_df.printSchema()
print("offices_df [../data/offices.csv]")
offices_df = spark.read.csv('../data/offices.csv', inferSchema=True, header=True)
offices_df.registerTempTable('offices')
offices_df.printSchema()
print("orderdetails_df [../data/orderdetails.csv]")
orderdetails_df = spark.read.csv('../data/orderdetails.csv', inferSchema=True, header=True)
orderdetails_df.registerTempTable('orderdetails')
orderdetails_df.printSchema()
print("orders_df [../data/orders.csv]")
orders_df = spark.read.csv('../data/orders.csv', inferSchema=True, header=True)
orders_df.registerTempTable('orders')
orders_df.printSchema()
print("payments_df [../data/payments.csv]")
payments_df = spark.read.csv('../data/payments.csv', inferSchema=True, header=True)
payments_df.registerTempTable('payments')
payments_df.printSchema()
print("productlines_df [../data/productlines.csv]")
productlines_df = spark.read.csv('../data/productlines.csv', inferSchema=True, header=True)
productlines_df.registerTempTable('productlines')
productlines_df.printSchema()
print("products_df [../data/products.csv]")
products_df = spark.read.csv('../data/products.csv', inferSchema=True, header=True)
products_df.registerTempTable('products')
products_df.printSchema()
# -
# ## Load Pandas Tables
# +
# for f in map(str.strip, os.popen('ls ../data/*.csv | grep -v 100_').readlines()):
# nam = os.path.basename(f).split('.')[0]
# print(f"""
# print("{nam}_pdf [{f}]")
# {nam}_pdf = pd.read_csv('{f}')
# display({nam}_pdf.dtypes)
# print()""")
# +
print("customers_pdf [../data/customers.csv]")
customers_pdf = pd.read_csv('../data/customers.csv')
display(customers_pdf.dtypes)
print()
print("employees_pdf [../data/employees.csv]")
employees_pdf = pd.read_csv('../data/employees.csv')
display(employees_pdf.dtypes)
print()
print("offices_pdf [../data/offices.csv]")
offices_pdf = pd.read_csv('../data/offices.csv')
display(offices_pdf.dtypes)
print()
print("orderdetails_pdf [../data/orderdetails.csv]")
orderdetails_pdf = pd.read_csv('../data/orderdetails.csv')
display(orderdetails_pdf.dtypes)
print()
print("orders_pdf [../data/orders.csv]")
orders_pdf = pd.read_csv('../data/orders.csv')
display(orders_pdf.dtypes)
print()
print("payments_pdf [../data/payments.csv]")
payments_pdf = pd.read_csv('../data/payments.csv')
display(payments_pdf.dtypes)
print()
print("productlines_pdf [../data/productlines.csv]")
productlines_pdf = pd.read_csv('../data/productlines.csv')
display(productlines_pdf.dtypes)
print()
print("products_pdf [../data/products.csv]")
products_pdf = pd.read_csv('../data/products.csv')
display(products_pdf.dtypes)
print()
# -
# # Pandas map() and apply()
#
# The `Series` methods `map()` and `apply()` can be used for element-wise operations:
#
# **map(self, arg, na_action=None) -> 'Series'**
#
# Map values of Series according to input correspondence.
#
# Used for substituting each value in a Series with another value,
# that may be derived from a function, a ``dict`` or
# a :class:`Series`.
#
# Parameters
# ----------
# arg : function, collections.abc.Mapping subclass or Series
# Mapping correspondence.
# na_action : {None, 'ignore'}, default None
# If 'ignore', propagate NaN values, without passing them to the
# mapping correspondence.
#
# Returns
# -------
# Series
# Same index as caller.
#
#
# **apply(self, func, convert_dtype=True, args=(), **kwds)**
#
# Invoke function on values of Series.
#
# Can be ufunc (a NumPy function that applies to the entire Series)
# or a Python function that only works on single values.
#
# Parameters
# ----------
# func : function
# Python function or NumPy ufunc to apply.
# convert_dtype : bool, default True
# Try to find better dtype for elementwise function results. If
# False, leave as dtype=object.
# args : tuple
# Positional arguments passed to func after the series value.
# **kwds
# Additional keyword arguments passed to func.
#
# Returns
# -------
# Series or DataFrame
# If func returns a Series object the result will be a DataFrame.
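# As a quick illustration (a minimal sketch on a toy Series, not tied to the course data), the two methods behave as follows:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, None])

# map(): per-value substitution; na_action='ignore' propagates NaN
# without passing it to the function
mapped = s.map(lambda v: v * 10, na_action='ignore')

# apply(): invoke a Python function on each value (NaN + 1 stays NaN)
applied = s.apply(lambda v: v + 1)

# map() also accepts a dict as the mapping correspondence;
# values with no matching key become NaN
pets = pd.Series(['cat', 'dog']).map({'cat': 'kitten'})

print(mapped.tolist())   # [10.0, 20.0, nan]
print(applied.tolist())  # [2.0, 3.0, nan]
print(pets.tolist())     # ['kitten', nan]
```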
orders_pdf.head()
# ## Example: convert columns
# +
import datetime
for col in filter(lambda s: s.endswith('Date'), orders_pdf.columns):
print(col)
orders_pdf[col] = orders_pdf[col].map(lambda s: datetime.datetime.strptime(s, '%Y-%m-%d'), na_action='ignore')
display(orders_pdf.dtypes)
# -
# help(pd.Series.apply)
# # Map-Reduce & Group By
# 
# %%writefile three_by_three.csv
text
Deer Bear River
Car Car River
Deer Car Bear
# ## Python package `functools`
# https://www.geeksforgeeks.org/reduce-in-python/
from functools import reduce
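# As a minimal standalone sketch of what `reduce` does (a left fold over a sequence), before it is applied to the word lists below:

```python
from functools import reduce

# reduce(f, seq) computes f(f(f(seq[0], seq[1]), seq[2]), ...)
total = reduce(lambda a, b: a + b, [1, 2, 3, 4])
print(total)  # 10

# the same left fold concatenates per-row word lists into one flat list
words = reduce(lambda a, b: a + b, [['Deer', 'Bear'], ['River'], ['Car']])
print(words)  # ['Deer', 'Bear', 'River', 'Car']
```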
# +
pdf = pd.read_csv('three_by_three.csv')
pdf2 = pd.DataFrame({'word': reduce(lambda a, b: a + b, pdf.text.str.split())})
pdf2['n'] = 1
# display(pdf2)
pdf2.groupby('word').agg({'n': 'sum'})
# -
# ## Spark
df = spark.read.csv('three_by_three.csv', header=True)
df.printSchema()
df.select(F.explode(F.split('text', r'\s')).alias('word')) \
.groupBy('word').agg(F.count('word').alias('n')) \
.show()
# ## SQL
df.createOrReplaceTempView('threebythree')
q = """
SELECT word, count(1) AS n
FROM (
SELECT explode(split(text, ' ')) AS word
FROM threebythree
)
GROUP BY word
"""
spark.sql(q).show()
# # Joining Tables
#
# Which classic and vintage car sales were handled by US sales reps?
# That is: employees with the job title "Sales Rep" who are assigned to a US office and who serve as the sales rep for customers ordering classic and vintage cars.
#
# ## SQL
# +
q = """
WITH us_sales_reps AS (
SELECT employeeNumber, lastName, firstName, city, postalCode
FROM employees e
JOIN offices o
ON e.officeCode = o.officeCode
WHERE e.jobTitle = 'Sales Rep' AND o.country = 'USA'
),
car_orders AS (
SELECT prod.productCode, prod.buyPrice, prod.MSRP, det.priceEach, det.quantityOrdered, ord.customerNumber, ord.shippedDate
FROM orders ord
JOIN orderdetails det
ON ord.orderNumber = det.orderNumber
JOIN products prod
ON det.productCode = prod.productCode
WHERE prod.productLine IN ('Classic Cars', 'Vintage Cars') AND ord.status = 'Shipped'
)
SELECT sr.*, car.*
FROM car_orders car
JOIN customers cst
ON car.customerNumber = cst.customerNumber
JOIN us_sales_reps sr
ON cst.salesRepEmployeeNumber = sr.employeeNumber
"""
print(spark.sql(q).count())
# spark.sql(q).limit(10).toPandas()
# -
# ## Pandas
# +
df1 = pd.merge(employees_pdf, offices_pdf, on='officeCode')
df2 = df1[ (df1.jobTitle == 'Sales Rep') & (df1.country == 'USA') ]
us_sales_reps_pdf = df2[['employeeNumber', 'lastName', 'firstName', 'city']]
df3 = pd.merge(
pd.merge(orders_pdf, orderdetails_pdf, on='orderNumber'),
products_pdf, on='productCode'
)
df4 = df3[ df3.productLine.isin(['Classic Cars', 'Vintage Cars']) & (df3.status == 'Shipped') ]
car_orders_pdf = df4[['productCode', 'buyPrice', 'MSRP', 'priceEach', 'quantityOrdered', 'customerNumber', 'shippedDate']]
df5 = pd.merge(
us_sales_reps_pdf,
pd.merge(car_orders_pdf, customers_pdf[['customerNumber', 'salesRepEmployeeNumber']], on='customerNumber'),
left_on='employeeNumber', right_on='salesRepEmployeeNumber'
)
us_car_orders_pdf = df5[['employeeNumber', 'lastName', 'firstName', 'city', 'productCode',
                         'buyPrice', 'MSRP', 'priceEach', 'quantityOrdered', 'customerNumber', 'shippedDate']]
print(us_car_orders_pdf.shape)
# -
# ## Spark
# +
us_sales_reps_df = employees_df \
.join(offices_df, 'officeCode') \
.where("""jobTitle = 'Sales Rep' AND country = 'USA'""") \
.select('employeeNumber', 'lastName', 'firstName', 'city')
car_orders_df = orderdetails_df \
.join(products_df, 'productCode') \
.where("productLine in ('Classic Cars', 'Vintage Cars')") \
.join(orders_df, 'orderNumber') \
.where("status = 'Shipped'") \
.select('productCode', 'buyPrice', 'MSRP', 'priceEach', 'quantityOrdered', 'customerNumber', 'shippedDate')
us_car_orders_df = customers_df.select('customerNumber', 'salesRepEmployeeNumber') \
.join(us_sales_reps_df, us_sales_reps_df.employeeNumber == customers_df.salesRepEmployeeNumber) \
.join(car_orders_df, 'customerNumber')
print(us_car_orders_df.count())
# -
| 4_Processing_Tabular_Data/SQL_Pandas_Spark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # lec11
with open('jmu_news.txt', 'r') as jmu_news:
news_content = jmu_news.read()
print(news_content)
# +
from collections import Counter
count_result = Counter( ['a','a','b'] )
print(count_result.most_common())
# -
with open('jmu_news.txt','r') as jmu_news:
news_content = jmu_news.read()
word_list = news_content.split()
count_result = Counter(word_list)
for word, count in count_result.most_common(10):
print(word,count)
# +
num_list = [1,2,3,4]
new_list = [i+1 for i in num_list]
print(new_list)
# +
with open('jmu_news.txt', 'r') as jmu_news:
news_content = jmu_news.read()
word_list = news_content.split()
    lower_word_list = [word.lower() for word in word_list]
    count_result = Counter(lower_word_list)
for word, count in count_result.most_common(10):
print(word,count)
# -
print('{} salary is ${}'.format('Tom', 6000))
# +
import json
from pprint import pprint
# -
with open('demo.json','r') as json_file:
json_content = json.load(json_file)
pprint(json_content)
import urllib.request
# +
url = 'https://www.jmu.edu'
res = urllib.request.urlopen(url)
html_content = res.read()
print(html_content.decode('utf-8'))
# -
| lec11.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: myenv
# language: python
# name: myenv
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
import pandas as pd
import numpy as np
import os
import time
import regex as re
import math
# -
from underthesea import word_tokenize
import re
def remove_emojis(data):
emoj = re.compile("["
u"\U0001F600-\U0001F64F"
u"\U0001F300-\U0001F5FF"
u"\U0001F680-\U0001F6FF"
u"\U0001F1E0-\U0001F1FF"
u"\U00002500-\U00002BEF"
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
u"\U0001f926-\U0001f937"
u"\U00010000-\U0010ffff"
u"\u2640-\u2642"
u"\u2600-\u2B55"
u"\u200d"
u"\u23cf"
u"\u23e9"
u"\u231a"
u"\ufe0f"
u"\u3030"
"]+", re.UNICODE)
return re.sub(emoj, '', data)
# +
import regex as re
uniChars = "àáảãạâầấẩẫậăằắẳẵặèéẻẽẹêềếểễệđìíỉĩịòóỏõọôồốổỗộơờớởỡợùúủũụưừứửữựỳýỷỹỵÀÁẢÃẠÂẦẤẨẪẬĂẰẮẲẴẶÈÉẺẼẸÊỀẾỂỄỆĐÌÍỈĨỊÒÓỎÕỌÔỒỐỔỖỘƠỜỚỞỠỢÙÚỦŨỤƯỪỨỬỮỰỲÝỶỸỴÂĂĐÔƠƯ"
unsignChars = "aaaaaaaaaaaaaaaaaeeeeeeeeeeediiiiiooooooooooooooooouuuuuuuuuuuyyyyyAAAAAAAAAAAAAAAAAEEEEEEEEEEEDIIIOOOOOOOOOOOOOOOOOOOUUUUUUUUUUUYYYYYAADOOU"
def loaddicchar():
dic = {}
char1252 = 'à|á|ả|ã|ạ|ầ|ấ|ẩ|ẫ|ậ|ằ|ắ|ẳ|ẵ|ặ|è|é|ẻ|ẽ|ẹ|ề|ế|ể|ễ|ệ|ì|í|ỉ|ĩ|ị|ò|ó|ỏ|õ|ọ|ồ|ố|ổ|ỗ|ộ|ờ|ớ|ở|ỡ|ợ|ù|ú|ủ|ũ|ụ|ừ|ứ|ử|ữ|ự|ỳ|ý|ỷ|ỹ|ỵ|À|Á|Ả|Ã|Ạ|Ầ|Ấ|Ẩ|Ẫ|Ậ|Ằ|Ắ|Ẳ|Ẵ|Ặ|È|É|Ẻ|Ẽ|Ẹ|Ề|Ế|Ể|Ễ|Ệ|Ì|Í|Ỉ|Ĩ|Ị|Ò|Ó|Ỏ|Õ|Ọ|Ồ|Ố|Ổ|Ỗ|Ộ|Ờ|Ớ|Ở|Ỡ|Ợ|Ù|Ú|Ủ|Ũ|Ụ|Ừ|Ứ|Ử|Ữ|Ự|Ỳ|Ý|Ỷ|Ỹ|Ỵ'.split(
'|')
charutf8 = "à|á|ả|ã|ạ|ầ|ấ|ẩ|ẫ|ậ|ằ|ắ|ẳ|ẵ|ặ|è|é|ẻ|ẽ|ẹ|ề|ế|ể|ễ|ệ|ì|í|ỉ|ĩ|ị|ò|ó|ỏ|õ|ọ|ồ|ố|ổ|ỗ|ộ|ờ|ớ|ở|ỡ|ợ|ù|ú|ủ|ũ|ụ|ừ|ứ|ử|ữ|ự|ỳ|ý|ỷ|ỹ|ỵ|À|Á|Ả|Ã|Ạ|Ầ|Ấ|Ẩ|Ẫ|Ậ|Ằ|Ắ|Ẳ|Ẵ|Ặ|È|É|Ẻ|Ẽ|Ẹ|Ề|Ế|Ể|Ễ|Ệ|Ì|Í|Ỉ|Ĩ|Ị|Ò|Ó|Ỏ|Õ|Ọ|Ồ|Ố|Ổ|Ỗ|Ộ|Ờ|Ớ|Ở|Ỡ|Ợ|Ù|Ú|Ủ|Ũ|Ụ|Ừ|Ứ|Ử|Ữ|Ự|Ỳ|Ý|Ỷ|Ỹ|Ỵ".split(
'|')
for i in range(len(char1252)):
dic[char1252[i]] = charutf8[i]
return dic
dicchar = loaddicchar()
# Pass all the data through this function to normalize it
def covert_unicode(txt):
return re.sub(
r'à|á|ả|ã|ạ|ầ|ấ|ẩ|ẫ|ậ|ằ|ắ|ẳ|ẵ|ặ|è|é|ẻ|ẽ|ẹ|ề|ế|ể|ễ|ệ|ì|í|ỉ|ĩ|ị|ò|ó|ỏ|õ|ọ|ồ|ố|ổ|ỗ|ộ|ờ|ớ|ở|ỡ|ợ|ù|ú|ủ|ũ|ụ|ừ|ứ|ử|ữ|ự|ỳ|ý|ỷ|ỹ|ỵ|À|Á|Ả|Ã|Ạ|Ầ|Ấ|Ẩ|Ẫ|Ậ|Ằ|Ắ|Ẳ|Ẵ|Ặ|È|É|Ẻ|Ẽ|Ẹ|Ề|Ế|Ể|Ễ|Ệ|Ì|Í|Ỉ|Ĩ|Ị|Ò|Ó|Ỏ|Õ|Ọ|Ồ|Ố|Ổ|Ỗ|Ộ|Ờ|Ớ|Ở|Ỡ|Ợ|Ù|Ú|Ủ|Ũ|Ụ|Ừ|Ứ|Ử|Ữ|Ự|Ỳ|Ý|Ỷ|Ỹ|Ỵ',
lambda x: dicchar[x.group()], txt)
def remove_html(txt):
return re.sub(r'<[^>]*>', '', txt)
# +
uniChars = "àáảãạâầấẩẫậăằắẳẵặèéẻẽẹêềếểễệđìíỉĩịòóỏõọôồốổỗộơờớởỡợùúủũụưừứửữựỳýỷỹỵÀÁẢÃẠÂẦẤẨẪẬĂẰẮẲẴẶÈÉẺẼẸÊỀẾỂỄỆĐÌÍỈĨỊÒÓỎÕỌÔỒỐỔỖỘƠỜỚỞỠỢÙÚỦŨỤƯỪỨỬỮỰỲÝỶỸỴÂĂĐÔƠƯ"
unsignChars = "aaaaaaaaaaaaaaaaaeeeeeeeeeeediiiiiooooooooooooooooouuuuuuuuuuuyyyyyAAAAAAAAAAAAAAAAAEEEEEEEEEEEDIIIOOOOOOOOOOOOOOOOOOOUUUUUUUUUUUYYYYYAADOOU"
def loaddicchar():
dic = {}
char1252 = 'à|á|ả|ã|ạ|ầ|ấ|ẩ|ẫ|ậ|ằ|ắ|ẳ|ẵ|ặ|è|é|ẻ|ẽ|ẹ|ề|ế|ể|ễ|ệ|ì|í|ỉ|ĩ|ị|ò|ó|ỏ|õ|ọ|ồ|ố|ổ|ỗ|ộ|ờ|ớ|ở|ỡ|ợ|ù|ú|ủ|ũ|ụ|ừ|ứ|ử|ữ|ự|ỳ|ý|ỷ|ỹ|ỵ|À|Á|Ả|Ã|Ạ|Ầ|Ấ|Ẩ|Ẫ|Ậ|Ằ|Ắ|Ẳ|Ẵ|Ặ|È|É|Ẻ|Ẽ|Ẹ|Ề|Ế|Ể|Ễ|Ệ|Ì|Í|Ỉ|Ĩ|Ị|Ò|Ó|Ỏ|Õ|Ọ|Ồ|Ố|Ổ|Ỗ|Ộ|Ờ|Ớ|Ở|Ỡ|Ợ|Ù|Ú|Ủ|Ũ|Ụ|Ừ|Ứ|Ử|Ữ|Ự|Ỳ|Ý|Ỷ|Ỹ|Ỵ'.split(
'|')
charutf8 = "à|á|ả|ã|ạ|ầ|ấ|ẩ|ẫ|ậ|ằ|ắ|ẳ|ẵ|ặ|è|é|ẻ|ẽ|ẹ|ề|ế|ể|ễ|ệ|ì|í|ỉ|ĩ|ị|ò|ó|ỏ|õ|ọ|ồ|ố|ổ|ỗ|ộ|ờ|ớ|ở|ỡ|ợ|ù|ú|ủ|ũ|ụ|ừ|ứ|ử|ữ|ự|ỳ|ý|ỷ|ỹ|ỵ|À|Á|Ả|Ã|Ạ|Ầ|Ấ|Ẩ|Ẫ|Ậ|Ằ|Ắ|Ẳ|Ẵ|Ặ|È|É|Ẻ|Ẽ|Ẹ|Ề|Ế|Ể|Ễ|Ệ|Ì|Í|Ỉ|Ĩ|Ị|Ò|Ó|Ỏ|Õ|Ọ|Ồ|Ố|Ổ|Ỗ|Ộ|Ờ|Ớ|Ở|Ỡ|Ợ|Ù|Ú|Ủ|Ũ|Ụ|Ừ|Ứ|Ử|Ữ|Ự|Ỳ|Ý|Ỷ|Ỹ|Ỵ".split(
'|')
for i in range(len(char1252)):
dic[char1252[i]] = charutf8[i]
return dic
dicchar = loaddicchar()
# Convert precomposed Unicode to combining-character Unicode (the more common form)
def convert_unicode(txt):
return re.sub(
r'à|á|ả|ã|ạ|ầ|ấ|ẩ|ẫ|ậ|ằ|ắ|ẳ|ẵ|ặ|è|é|ẻ|ẽ|ẹ|ề|ế|ể|ễ|ệ|ì|í|ỉ|ĩ|ị|ò|ó|ỏ|õ|ọ|ồ|ố|ổ|ỗ|ộ|ờ|ớ|ở|ỡ|ợ|ù|ú|ủ|ũ|ụ|ừ|ứ|ử|ữ|ự|ỳ|ý|ỷ|ỹ|ỵ|À|Á|Ả|Ã|Ạ|Ầ|Ấ|Ẩ|Ẫ|Ậ|Ằ|Ắ|Ẳ|Ẵ|Ặ|È|É|Ẻ|Ẽ|Ẹ|Ề|Ế|Ể|Ễ|Ệ|Ì|Í|Ỉ|Ĩ|Ị|Ò|Ó|Ỏ|Õ|Ọ|Ồ|Ố|Ổ|Ỗ|Ộ|Ờ|Ớ|Ở|Ỡ|Ợ|Ù|Ú|Ủ|Ũ|Ụ|Ừ|Ứ|Ử|Ữ|Ự|Ỳ|Ý|Ỷ|Ỹ|Ỵ',
lambda x: dicchar[x.group()], txt)
bang_nguyen_am = [['a', 'à', 'á', 'ả', 'ã', 'ạ', 'a'],
['ă', 'ằ', 'ắ', 'ẳ', 'ẵ', 'ặ', 'aw'],
['â', 'ầ', 'ấ', 'ẩ', 'ẫ', 'ậ', 'aa'],
['e', 'è', 'é', 'ẻ', 'ẽ', 'ẹ', 'e'],
['ê', 'ề', 'ế', 'ể', 'ễ', 'ệ', 'ee'],
['i', 'ì', 'í', 'ỉ', 'ĩ', 'ị', 'i'],
['o', 'ò', 'ó', 'ỏ', 'õ', 'ọ', 'o'],
['ô', 'ồ', 'ố', 'ổ', 'ỗ', 'ộ', 'oo'],
['ơ', 'ờ', 'ớ', 'ở', 'ỡ', 'ợ', 'ow'],
['u', 'ù', 'ú', 'ủ', 'ũ', 'ụ', 'u'],
['ư', 'ừ', 'ứ', 'ử', 'ữ', 'ự', 'uw'],
['y', 'ỳ', 'ý', 'ỷ', 'ỹ', 'ỵ', 'y']]
nguyen_am_to_ids = {}
for i in range(len(bang_nguyen_am)):
for j in range(len(bang_nguyen_am[i]) - 1):
nguyen_am_to_ids[bang_nguyen_am[i][j]] = (i, j)
def chuan_hoa_dau_tu_tieng_viet(word):
if not is_valid_vietnam_word(word):
return word
chars = list(word)
dau_cau = 0
nguyen_am_index = []
qu_or_gi = False
for index, char in enumerate(chars):
x, y = nguyen_am_to_ids.get(char, (-1, -1))
if x == -1:
continue
elif x == 9: # check qu
if index != 0 and chars[index - 1] == 'q':
chars[index] = 'u'
qu_or_gi = True
elif x == 5: # check gi
if index != 0 and chars[index - 1] == 'g':
chars[index] = 'i'
qu_or_gi = True
if y != 0:
dau_cau = y
chars[index] = bang_nguyen_am[x][0]
if not qu_or_gi or index != 1:
nguyen_am_index.append(index)
if len(nguyen_am_index) < 2:
if qu_or_gi:
if len(chars) == 2:
x, y = nguyen_am_to_ids.get(chars[1])
chars[1] = bang_nguyen_am[x][dau_cau]
else:
x, y = nguyen_am_to_ids.get(chars[2], (-1, -1))
if x != -1:
chars[2] = bang_nguyen_am[x][dau_cau]
else:
chars[1] = bang_nguyen_am[5][dau_cau] if chars[1] == 'i' else bang_nguyen_am[9][dau_cau]
return ''.join(chars)
return word
for index in nguyen_am_index:
x, y = nguyen_am_to_ids[chars[index]]
if x == 4 or x == 8: # ê, ơ
chars[index] = bang_nguyen_am[x][dau_cau]
# for index2 in nguyen_am_index:
# if index2 != index:
# x, y = nguyen_am_to_ids[chars[index]]
# chars[index2] = bang_nguyen_am[x][0]
return ''.join(chars)
if len(nguyen_am_index) == 2:
if nguyen_am_index[-1] == len(chars) - 1:
x, y = nguyen_am_to_ids[chars[nguyen_am_index[0]]]
chars[nguyen_am_index[0]] = bang_nguyen_am[x][dau_cau]
# x, y = nguyen_am_to_ids[chars[nguyen_am_index[1]]]
# chars[nguyen_am_index[1]] = bang_nguyen_am[x][0]
else:
# x, y = nguyen_am_to_ids[chars[nguyen_am_index[0]]]
# chars[nguyen_am_index[0]] = bang_nguyen_am[x][0]
x, y = nguyen_am_to_ids[chars[nguyen_am_index[1]]]
chars[nguyen_am_index[1]] = bang_nguyen_am[x][dau_cau]
else:
# x, y = nguyen_am_to_ids[chars[nguyen_am_index[0]]]
# chars[nguyen_am_index[0]] = bang_nguyen_am[x][0]
x, y = nguyen_am_to_ids[chars[nguyen_am_index[1]]]
chars[nguyen_am_index[1]] = bang_nguyen_am[x][dau_cau]
# x, y = nguyen_am_to_ids[chars[nguyen_am_index[2]]]
# chars[nguyen_am_index[2]] = bang_nguyen_am[x][0]
return ''.join(chars)
def is_valid_vietnam_word(word):
chars = list(word)
nguyen_am_index = -1
for index, char in enumerate(chars):
x, y = nguyen_am_to_ids.get(char, (-1, -1))
if x != -1:
if nguyen_am_index == -1:
nguyen_am_index = index
else:
if index - nguyen_am_index != 1:
return False
nguyen_am_index = index
return True
def lowercase_remove_noise_character(sentence):
    """
    Convert a Vietnamese sentence to the old-style tone-mark placement.
    :param sentence:
    :return:
    """
sentence = sentence.lower()
words = sentence.split()
for index, word in enumerate(words):
        cw = re.sub(r'(^\p{P}*)([\p{L}.]*\p{L}+)(\p{P}*$)', r'\1/\2/\3', word).split('/')
words[index] = ''.join(cw)
return ' '.join(words)
def remove_html(txt):
return re.sub(r'<[^>]*>', '', txt)
# -
path_dataset1 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/Rawdata_Tri2.xlsx"
df1 = pd.read_excel(path_dataset1, sheet_name = 'Comment_Tiki_Data')
df1.dropna()
df1.head()
path_dataset2 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/10k_14k.xlsx"
df2 = pd.read_excel(path_dataset2, sheet_name = 'Comment_Tiki_Data')
df2.dropna()
df2.head()
path_dataset3 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/8k-10k14k-16k.xlsx"
df3 = pd.read_excel(path_dataset3, sheet_name = 'Comment_Tiki_Data')
df3.dropna()
df3.head()
path_dataset4 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/Rawdata-0-3000.xlsx"
df4 = pd.read_excel(path_dataset4, sheet_name = 'Comment_Tiki_Data')
df4.dropna()
df4.head()
#0: negative
#1: positive
l_review = []
l_label = []
# +
for index, row in df1.iterrows():
# if int(row["Id"]) == 5000:
# break
review = row["Review"]
label = row["Label"]
if math.isnan(label):
# print(index)
continue
label = int(label)
if label != 0 and label != 1:
continue
l_review.append(review)
l_label.append(label)
# if label == None
# if not label:
# print(label)
# -
print(len(l_label))
# +
for index, row in df2.iterrows():
# if int(row["Id"]) > 2002:
# break
review = row["Review"]
label = row["Label"]
# if math.isnan(label):
# # print(index)
# continue
if label == "negative":
label = 0
elif label == "positive":
label = 1
else:
# print(label, index)
continue
label = int(label)
if label != 0 and label != 1:
continue
l_review.append(review)
l_label.append(label)
# if label == None
# if not label:
# print(label)
# -
print(len(l_label))
# +
for index, row in df3.iterrows():
# if int(row["Id"]) >= 8000:
# if int(row["Id"]) >= 12000:
# break
review = row["Review"]
label = row["Label"]
if math.isnan(label):
# print(index)
continue
label = int(label)
if label != 0 and label != 1:
continue
l_review.append(review)
l_label.append(label)
# if label == None
# if not label:
# print(label)
# -
print(len(l_label))
print(len(l_review))
# +
for index, row in df4.iterrows():
# if int(row["Id"]) >= 8000:
# if int(row["Id"]) >= 12000:
# break
review = row["Review"]
label = row["Label"]
if math.isnan(label):
# print(index)
continue
label = int(label)
if label != 0 and label != 1:
continue
l_review.append(review)
l_label.append(label)
# if label == None
# if not label:
# print(label)
# -
print(len(l_label))
print(len(l_review))
# +
# Merge version 1 and version 2
# -
path_dataset1_v1 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/Data_0_5k_sentiment.xlsx"
df1_v1 = pd.read_excel(path_dataset1_v1, sheet_name = 'Dataset')
df1_v1.dropna()
df1_v1.head()
path_dataset2_v1 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/Dataset_version1_BMT.xlsx"
df2_v1 = pd.read_excel(path_dataset2_v1, sheet_name = 'Label_Dataset3')
df2_v1.dropna()
df2_v1.head()
path_dataset3_v1 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/data20k-25k.xlsx"
df3_v1 = pd.read_excel(path_dataset3_v1, sheet_name = 'Sheet1')
df3_v1.dropna()
df3_v1.head()
l_v1_review = []
l_v1_label = []
# +
for index, row in df1_v1.iterrows():
if index > 5000:
break
review = row["Review"]
label = row["Label"]
if label == "negative":
label = 0
elif label == "positive":
label = 1
else:
continue
label = int(label)
if label != 0 and label != 1:
continue
l_v1_review.append(review)
l_v1_label.append(label)
print(len(l_v1_label))
# +
for index, row in df2_v1.iterrows():
if index < 10000:
continue
if index >= 13000:
break
review = row["Review"]
label = row["label"]
if label == "negative":
label = 0
elif label == "positive":
label = 1
else:
continue
label = int(label)
if label != 0 and label != 1:
continue
l_v1_review.append(review)
l_v1_label.append(label)
print(len(l_v1_label))
# +
for index, row in df3_v1.iterrows():
# if index < 20000:
# continue
# if index >= 25000:
# break
review = row["Review"]
label = row["Label"]
if label == "negative":
label = 0
elif label == "positive":
label = 1
else:
continue
label = int(label)
if label != 0 and label != 1:
continue
l_v1_review.append(review)
l_v1_label.append(label)
print(len(l_v1_label))
# -
l_label.extend(l_v1_label)
l_review.extend(l_v1_review)
# + active=""
#
#
# -
print(len(l_label))
# '''
# LOG: Get special character excel
# '''
# log_excel_character = ""
# for index, row in df1_v1.iterrows():
# string_get = row["Review"]
# # print(string_get)
# log_excel_character = string_get[82:89]
# print(log_excel_character)
# break
# +
# save_df = pd.DataFrame(columns=["Review", "Label"])
# save_df["Review"] = l_v1_review
# save_df["Label"] = l_v1_label
# save_df.to_excel("FullDataset_version1.xlsx",sheet_name='Dataset',index = True)
# -
# # **Clean Data**
# +
def clean_data(l_review, l_label, special_excel_character):
l_convert_string = []
l_convert_label = []
l_len_word = []
for index, review_str in enumerate(l_review):
if type(review_str) != str:
continue
#remove "\n", duplicate blank space
clean_string = review_str.replace("\n","")
# clean_string = review_str.replace(special_excel_character,"")
clean_string = " ".join(clean_string.split())
#remove tag html
clean_string = remove_html(clean_string)
#remove emoji, special character
clean_string = remove_emojis(clean_string)
# format unicode
unicode_string = covert_unicode(clean_string)
#format vietnamese sign
# #lowcase
sign_string = lowercase_remove_noise_character(unicode_string)
#get only vietnamese character
sign_string = re.sub(r'[^\s\wáàảãạăắằẳẵặâấầẩẫậéèẻẽẹêếềểễệóòỏõọôốồổỗộơớờởỡợíìỉĩịúùủũụưứừửữựýỳỷỹỵđ_]','',sign_string)
sign_string = " ".join(sign_string.split())
if len(sign_string) <= 1:
continue
# print(sign_string)
#tokenize
token_string = word_tokenize(sign_string, format="text")
log_token = word_tokenize(sign_string)
l_len_word.append(len(log_token))
l_convert_string.append(token_string)
l_convert_label.append(l_label[index])
return l_convert_string, l_convert_label, l_len_word
# -
l_convert_string, l_convert_label, l_len_word = clean_data(l_review, l_label, "")  # special-character filtering disabled: the log_excel_character cell above is commented out
print(len(l_convert_string))
print(l_convert_string[-10:])
# +
# import random
# random.shuffle(l_convert_string)
# -
save_df = pd.DataFrame(columns=["Review", "Label"])
save_df["Review"] = l_convert_string
save_df["Label"] = l_convert_label
save_df.to_excel("Dataset_24_12_version1Mergeversion2.xlsx",sheet_name='Dataset',index = True)
# +
# multilable_dataset = pd.DataFrame()
# multilable_dataset = pd.DataFrame(columns=["Review", "<NAME>", "<NAME>"])
# multilable_dataset["Review"] = l_convert_string
# multilable_dataset.to_excel("multilable_dataset.xlsx",sheet_name='Dataset',index = True)
# -
#
# # **Re-clean and Merge Dataset**
path_dataset = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/data20k-25k.xlsx"
df = pd.read_excel(path_dataset, sheet_name = 'Sheet1')
path_dataset2 = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Data/Data_0_5k_sentiment.xlsx"
df2 = pd.read_excel(path_dataset2, sheet_name = 'Dataset')
# +
l_review = []
l_label = []
for index, row in df.iterrows():
review = row["Review"]
label = row["Label"]
l_review.append(review)
l_label.append(label)
for index, row in df2.iterrows():
review = row["Review"]
label = row["Label"]
l_review.append(review)
l_label.append(label)
if index >= 5000:
break
# -
print(len(l_review))
# +
l_convert_review = []
l_convert_label = []
# nagative 0
# positive 1
# None 2
l_len_word = []
for index, review in enumerate(l_review):
review_str = review
label = l_label[index]
if type(review_str) != str:
print(type(review_str))
continue
#remove "\n", duplicate blank space
clean_string = review_str.replace("\n","")
clean_string = " ".join(clean_string.split())
#remove tag html
clean_string = remove_html(clean_string)
#remove emoji, special character
clean_string = remove_emojis(clean_string)
# format unicode
unicode_string = covert_unicode(clean_string)
#format vietnamese sign
if len(unicode_string) <= 1:
continue
sign_string = lowercase_remove_noise_character(unicode_string)
sign_string = re.sub(r'[^\s\wáàảãạăắằẳẵặâấầẩẫậéèẻẽẹêếềểễệóòỏõọôốồổỗộơớờởỡợíìỉĩịúùủũụưứừửữựýỳỷỹỵđ_]','',sign_string)
sign_string = " ".join(sign_string.split())
token_string = word_tokenize(sign_string, format="text")
log_token = word_tokenize(sign_string)
if len(token_string) <= 1:
continue
if label == "negative":
label = 0
elif label == "positive":
label = 1
else:
label = 2
l_len_word.append(len(log_token))
l_convert_review.append(token_string)
l_convert_label.append(label)
# -
print(len(l_convert_review))
# +
# print(l_convert_review[:100])
# -
save_df = pd.DataFrame(columns=["Review", "Label"])
save_df["Review"] = l_convert_review
save_df["Label"] = l_convert_label
save_df.to_excel("Merge_0-5k_20-25k.xlsx",sheet_name='Dataset',index = True)
l_convert_review = l_convert_string
# # **EDA DATA**
#Log freq word in all labels
dictionary_all = {}
for index, sentence in enumerate(l_convert_review):
# if l_label[index] == 2:
tmp = sentence.split(" ")
for word in tmp:
if word in dictionary_all:
dictionary_all[word] += 1
else:
dictionary_all[word] = 1
sorted_freq_all = {k: v for k, v in sorted(dictionary_all.items(), key=lambda item: item[1],reverse=True)}
#Log in negative
dictionary_negative = {}
for index, sentence in enumerate(l_convert_review):
if l_convert_label[index] == 0:
tmp = sentence.split(" ")
for word in tmp:
if word in dictionary_negative:
dictionary_negative[word] += 1
else:
dictionary_negative[word] = 1
sorted_freq_negative = {k: v for k, v in sorted(dictionary_negative.items(), key=lambda item: item[1],reverse=True)}
#Log in positive
dictionary_positive = {}
for index, sentence in enumerate(l_convert_review):
if l_convert_label[index] == 1:
tmp = sentence.split(" ")
for word in tmp:
if word in dictionary_positive:
dictionary_positive[word] += 1
else:
dictionary_positive[word] = 1
sorted_freq_positive = {k: v for k, v in sorted(dictionary_positive.items(), key=lambda item: item[1],reverse=True)}
#Log in None
dictionary_none = {}
for index, sentence in enumerate(l_convert_review):
if l_convert_label[index] == 2:
tmp = sentence.split(" ")
for word in tmp:
if word in dictionary_none:
dictionary_none[word] += 1
else:
dictionary_none[word] = 1
sorted_freq_none = {k: v for k, v in sorted(dictionary_none.items(), key=lambda item: item[1],reverse=True)}
#Log top 30
log_freq = []
log_ratio = []
log_freq_negative = []
log_freq_positive = []
log_freq_none = []
count = 0
for key in sorted_freq_all.keys():
count += 1
if count >= 30:
break
    freq_total = sorted_freq_all[key]
    freq_negative = sorted_freq_negative.get(key, 0)
    freq_positive = sorted_freq_positive.get(key, 0)
    freq_non = sorted_freq_none.get(key, 0)
    ratio = freq_negative / freq_positive if freq_positive else 0.0
    log_ratio.append(ratio)
log_freq_negative.append(freq_negative)
log_freq_positive.append(freq_positive)
log_freq_none.append(freq_non)
log_freq.append(key)
print(log_freq)
# # **Plot Bar Chart**
# +
import numpy as np
import matplotlib.pyplot as plt
# plt.figure(figsize=(8, 24), dpi=800)
# plt.rcParams.update({'font.size': 18})
ind = np.arange(len(log_freq_negative))
width = 0.35
fig = plt.subplots(figsize =(36, 8), dpi=80)
p1 = plt.bar(ind, log_freq_negative, width)
p2 = plt.bar(ind, log_freq_positive, width, bottom = log_freq_negative)
plt.ylabel('Count')
plt.title('Bar chart of word occurrence counts')
plt.xticks(ind, log_freq)
# plt.yticks(np.arange(0, 81, 10))
plt.legend((p1[0], p2[0]), ('Negative', 'Positive'))
plt.show()
# +
import numpy as np
import matplotlib.pyplot as plt
# plt.figure(figsize=(8, 24), dpi=800)
# plt.rcParams.update({'font.size': 18})
ind = np.arange(len(log_freq_negative))
width = 0.35
fig = plt.subplots(figsize =(36, 8), dpi=80)
p1 = plt.bar(ind, log_freq_negative, width)
plt.ylabel('Count')
plt.title('Bar chart of word occurrence counts in Negative')
plt.xticks(ind, log_freq)
plt.show()
# +
import numpy as np
import matplotlib.pyplot as plt
# plt.figure(figsize=(8, 24), dpi=800)
# plt.rcParams.update({'font.size': 18})
ind = np.arange(len(log_freq_positive))
width = 0.35
fig = plt.subplots(figsize =(36, 8), dpi=80)
p1 = plt.bar(ind, log_freq_positive, width)
plt.ylabel('Count')
plt.title('Bar chart of word occurrence counts in Positive')
plt.xticks(ind, log_freq)
plt.show()
# +
import numpy as np
import matplotlib.pyplot as plt
ind = np.arange(len(log_freq_none))
width = 0.35
fig = plt.subplots(figsize =(36, 8), dpi=80)
p1 = plt.bar(ind, log_freq_none, width)
plt.ylabel('Count')
plt.title('Bar chart of word occurrence counts in None')
plt.xticks(ind, log_freq)
plt.show()
# -
# # **Plot Histogram**
# +
# Histogram of word counts per review
review_data = np.array(l_len_word)
fig = plt.figure(figsize =(36, 8), dpi=160)
d = np.diff(np.unique(review_data)).min()
left_of_first_bin = review_data.min() - float(d)/2
right_of_last_bin = review_data.max() + float(d)/2
plt.title('Histogram of word counts per review')
plt.hist(review_data, np.arange(left_of_first_bin, right_of_last_bin + d, d))
plt.show()
# -
# # **Plot Box plot**
# +
# Box plot of word counts per review
data = np.array(l_len_word)
fig = plt.figure(figsize =(10, 7))
# Creating plot
plt.boxplot(data)
# show plot
plt.show()
# -
# # **Preprocessing**
# Remove stop words and cap the sentence length
# Use the most frequent words as the stop-word list
# Set max length = 200
#Reload data
path_dataset = "/home/haiduong/Documents/DataScience-SentimentAnalysis/Clean_dataset.xlsx"
dataframe = pd.read_excel(path_dataset, sheet_name = 'Dataset')
dataframe.head()
# log_freq
MAX_LENGTH = 200
def remove_stop_word(sentence, log_stopword):
    # drop whole tokens only; str.replace would also delete substrings
    # inside longer underscore-joined tokens
    return " ".join(w for w in sentence.split() if w not in log_stopword)
def set_length_sentence(sentence):
tmp = sentence.split(" ")
if len(tmp) > MAX_LENGTH:
tmp = tmp[:MAX_LENGTH]
print("set length")
sentence = " ".join(tmp)
return sentence
log_stopword = ["hàng", "mình","có","mua", "giao", "thì", "sản_phẩm","là", "và", "nhưng", "nên", "shop", "tiki", "cho", "như", "1", "ko", "này", "với"]
print(len(log_stopword))
l_convert_review_nonstopword = []
l_convert_label_nonstopword = []
for index, review in enumerate(l_convert_review):
un_stopword_review = remove_stop_word(review,log_stopword)
review = set_length_sentence(un_stopword_review)
if len(review) < 1:
continue
l_convert_review_nonstopword.append(review)
l_convert_label_nonstopword.append(l_convert_label[index])
l_convert_review = []
l_label_ship = []
l_label_sp = []
for index, row in dataframe.iterrows():
if pd.isnull(dataframe.at[index,'Label_Ship']) or pd.isnull(dataframe.at[index,'Label_SP']):
continue
review = str(row["Review"])
label_ship = int(row["Label_Ship"])
label_sp = int(row["Label_SP"])
if label_ship != 0 and label_ship != 1:
print(label_ship)
if label_sp != 0 and label_sp != 1:
print(label_sp)
un_stopword_review = remove_stop_word(review,log_stopword)
    review = set_length_sentence(un_stopword_review)
l_convert_review.append(review)
l_label_ship.append(label_ship)
l_label_sp.append(label_sp)
# +
save_df = pd.DataFrame(columns=["Review", "Label"])
save_df["Review"] = l_convert_review_nonstopword
save_df["Label"] = l_convert_label_nonstopword
save_df.to_excel("Dataset_23_12_non_stopword.xlsx",sheet_name='Dataset',index = True)
# -
| Clean and EDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import pylab as plt
import ngene as ng
from ngene.architectures.simple import architecture
import ccgpack as ccg
import tensorflow as tf
from tqdm import tqdm, trange
from scipy.stats import ttest_ind
cl = np.load('../data/cl_planck_lensed.npy')
sfs = ccg.StochasticFieldSimulator(cl)
size = 7.2
class DataProvider(object):
def __init__(self,nside,size,alpha,num,n_buffer=200,reinit=1000):
self.nside = nside
self.alpha = alpha
self.num = num
self.size = size
self.n_buffer = n_buffer
self.reinit = reinit
self.couter = 0
def simulate(self):
s = np.zeros((self.nside, self.nside), dtype=np.double)
begins = ccg.random_inside(s,num=self.num)
ends = ccg.random_inside(s,num=self.num)
g = sfs.simulate(self.nside,self.size)
g -= g.min()
g /= g.max()
s = ccg.draw_line(s,begins=begins,ends=ends,value=1)
return g,s
def simulation_initiation(self):
gs = []
ss = []
# for i in tqdm(range(self.n_buffer), total=self.n_buffer, unit=" map", desc='Initiation', ncols=70):
for i in range(self.n_buffer):
g,s = self.simulate()
gs.append(g)
ss.append(s)
return np.array(gs),np.array(ss)
def __call__(self,n,alpha=None):
if self.couter%self.reinit==0:
self.gs, self.ss = self.simulation_initiation()
if alpha is None:
alpha = self.alpha
self.couter += 1
x_out = []
y_out = []
for i in range(n):
i_g,i_s = np.random.randint(0,self.n_buffer,2)
x_out.append(self.gs[i_g]+alpha*self.ss[i_s])
y_out.append(self.ss[i_s])
x_out = np.array(x_out)
y_out = np.array(y_out)
return np.expand_dims(x_out,-1),np.expand_dims(y_out,-1)
# +
nside=200
dp = DataProvider(nside=nside,size=7,alpha=0.7,num=50)
dp0 = DataProvider(nside=nside,size=7,alpha=0,num=50,n_buffer=100)
x,y = dp0(1)
x,y = dp(1)
fig, (ax1,ax2)= plt.subplots(ncols=2, nrows=1, figsize=(20, 10))
ax1.imshow(x[0,:,:,0])
ax1.axis('off')
ax2.imshow(y[0,:,:,0])
ax2.axis('off')
# +
def arch(x_in):
x_out = architecture(x_in=x_in,n_layers=5,res=2)
return x_out
def check(model,dp,dp0):
l0 = []
l1 = []
for i in range(100):
x,y = dp(1)
x0,y = dp0(1)
l0.append(model.conv(x0).std())
l1.append(model.conv(x).std())
b0,h0 = ccg.pdf(l0,20)
b1,h1 = ccg.pdf(l1,20)
plt.plot(b0,h0)
plt.plot(b1,h1)
    pv = ttest_ind(l0, l1)[1]
    print('p-value:', pv)
    return pv
# +
model = ng.Model(nx=nside,ny=nside,n_channel=1,n_class=1,
restore=0,model_add='./model/'+str(0),arch=arch)
print('# of variables:',model.n_variables)
# +
# model.train(data_provider=dp,training_epochs = 10,iterations=20 ,n_s = 10,
# learning_rate = 0.01, time_limit=None,
# metric=None, verbose=1,death_preliminary_check = 30,
# death_frequency_check = 1000)
# pv = check()
# +
alphas = []
success = []
dalpha = 0.005
p_move = 0
for i in range(5):
model.train(data_provider=dp,training_epochs = 5,iterations=10 ,n_s = 10,
learning_rate = 0.01, time_limit=None,
metric=None, verbose=1,death_preliminary_check = 30,
death_frequency_check = 1000)
    pv = check(model, dp, dp0)
print(pv)
if pv<1e-7:
if p_move == 1:
dalpha = dalpha/2
while dalpha>dp.alpha:
dalpha = dalpha/2
dp.alpha = dp.alpha-dalpha
p_move = -1
else:
if p_move == -1:
dalpha = dalpha/2
while dalpha>dp.alpha:
dalpha = dalpha/2
dp.alpha = dp.alpha+dalpha
p_move = 1
success.append(p_move)
alphas.append(dp.alpha)
print(dp.alpha)
model.model_add='./model/'+str(i+1)+'_'+str(dp.alpha)
# -
# +
alphas = []
success = []
dalpha = 0.05
pv_lim = 1e-7
training_epochs = 1
iterations=10
n_s = 10
i = 0
for _ in range(10):
alphas.append(dp.alpha)
model.model_add='./model/'+str(i)
print('Training model:{}, alpha:{}'.format(model.model_add,dp.alpha))
model.train(data_provider=dp,training_epochs=training_epochs,
iterations=iterations,n_s=n_s,
learning_rate=0.01, time_limit=None,
metric=None, verbose=1,death_preliminary_check=30,
death_frequency_check=1000)
pv = check(model,dp,dp0)
if pv>pv_lim and i!=0:
dp.alpha = dp.alpha+dalpha
if np.random.uniform()>0.5:
dalpha = dalpha/2
model.model_add='./model/'+str(i-1)
model.restore()
else:
dp.alpha = dp.alpha-dalpha
i += 1
success.append(pv<pv_lim)
# -
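Both loops above implement the same bisection-style controller: push `alpha` harder while the model still distinguishes the two data providers, back off when it fails, and shrink the step on direction reversals. A standalone sketch of one controller step (function and variable names are mine; I use `>=` in the positivity guard so `alpha` can never reach exactly zero, a slight tightening of the loops above):

```python
def update_alpha(alpha, dalpha, detected, prev_dir):
    """One controller step: detection succeeded -> harder (decrease alpha),
    failed -> easier (increase alpha). Halve the step on each direction
    reversal, and keep the step below alpha so alpha stays positive."""
    direction = -1 if detected else 1
    if prev_dir == -direction:          # reversal: refine the step
        dalpha /= 2
    while dalpha >= alpha:              # keep alpha strictly positive
        dalpha /= 2
    return alpha + direction * dalpha, dalpha, direction

alpha, dalpha, prev_dir = 0.1, 0.05, 0
for detected in [True, True, False, True, False]:
    alpha, dalpha, prev_dir = update_alpha(alpha, dalpha, detected, prev_dir)
```

Each reversal halves `dalpha`, so `alpha` homes in on the hardest difficulty the model can still pass.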
# +
fig,(ax1,ax2,ax3) = plt.subplots(ncols=3,nrows=1,figsize=(15,7))
x,y = dp(1)
x_pred = model.conv(x)
ax1.imshow(x[0,:,:,0])
ax1.set_title('Input')
ax2.imshow(y[0,:,:,0])
ax2.set_title('Output')
ax3.imshow(x_pred[:,:,0])
ax3.set_title('Prediction')
# -
| notebooks/.ipynb_checkpoints/line-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# # !wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
# # !unzip uncased_L-12_H-768_A-12.zip
# # !wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
# # !wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
# -
class SquadExample(object):
"""A single training/test example for simple sequence classification.
For examples without an answer, the start and end position are -1.
"""
def __init__(self,
qas_id,
question_text,
doc_tokens,
orig_answer_text=None,
start_position=None,
end_position=None,
is_impossible=False):
self.qas_id = qas_id
self.question_text = question_text
self.doc_tokens = doc_tokens
self.orig_answer_text = orig_answer_text
self.start_position = start_position
self.end_position = end_position
self.is_impossible = is_impossible
def __str__(self):
return self.__repr__()
def __repr__(self):
s = ""
s += "qas_id: %s" % (tokenization.printable_text(self.qas_id))
s += ", question_text: %s" % (
tokenization.printable_text(self.question_text))
s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
        if self.start_position:
            s += ", start_position: %d" % (self.start_position)
        if self.end_position:
            s += ", end_position: %d" % (self.end_position)
        if self.is_impossible:
            s += ", is_impossible: %r" % (self.is_impossible)
return s
# +
import tensorflow as tf
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
from bert import modeling
from tqdm import tqdm
import json
import math
BERT_VOCAB = 'uncased_L-12_H-768_A-12/vocab.txt'
BERT_INIT_CHKPNT = 'uncased_L-12_H-768_A-12/bert_model.ckpt'
BERT_CONFIG = 'uncased_L-12_H-768_A-12/bert_config.json'
tokenization.validate_case_matches_checkpoint(True,BERT_INIT_CHKPNT)
tokenizer = tokenization.FullTokenizer(
vocab_file=BERT_VOCAB, do_lower_case=True)
# +
version_2_with_negative = False
def read_squad_examples(input_file, is_training, version_2_with_negative = False):
"""Read a SQuAD json file into a list of SquadExample."""
with tf.gfile.Open(input_file, 'r') as reader:
input_data = json.load(reader)['data']
def is_whitespace(c):
if c == ' ' or c == '\t' or c == '\r' or c == '\n' or ord(c) == 0x202F:
return True
return False
examples = []
for entry in input_data:
for paragraph in entry['paragraphs']:
paragraph_text = paragraph['context']
doc_tokens = []
char_to_word_offset = []
prev_is_whitespace = True
for c in paragraph_text:
if is_whitespace(c):
prev_is_whitespace = True
else:
if prev_is_whitespace:
doc_tokens.append(c)
else:
doc_tokens[-1] += c
prev_is_whitespace = False
char_to_word_offset.append(len(doc_tokens) - 1)
for qa in paragraph['qas']:
qas_id = qa['id']
question_text = qa['question']
start_position = None
end_position = None
orig_answer_text = None
is_impossible = False
if is_training:
if version_2_with_negative:
is_impossible = qa['is_impossible']
if (len(qa['answers']) != 1) and (not is_impossible):
raise ValueError(
'For training, each question should have exactly 1 answer.'
)
if not is_impossible:
answer = qa['answers'][0]
orig_answer_text = answer['text']
answer_offset = answer['answer_start']
answer_length = len(orig_answer_text)
start_position = char_to_word_offset[answer_offset]
end_position = char_to_word_offset[
answer_offset + answer_length - 1
]
actual_text = ' '.join(
doc_tokens[start_position : (end_position + 1)]
)
cleaned_answer_text = ' '.join(
tokenization.whitespace_tokenize(orig_answer_text)
)
if actual_text.find(cleaned_answer_text) == -1:
tf.logging.warning(
"Could not find answer: '%s' vs. '%s'",
actual_text,
cleaned_answer_text,
)
continue
else:
start_position = -1
end_position = -1
orig_answer_text = ''
example = SquadExample(
qas_id = qas_id,
question_text = question_text,
doc_tokens = doc_tokens,
orig_answer_text = orig_answer_text,
start_position = start_position,
end_position = end_position,
is_impossible = is_impossible,
)
examples.append(example)
return examples
# -
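The core of `read_squad_examples` is the whitespace walk that builds `doc_tokens` together with `char_to_word_offset`, the map that turns SQuAD's character-level `answer_start` into a word index. The same logic in isolation (standalone helper name is mine):

```python
def whitespace_tokenize_with_offsets(text):
    """Return (doc_tokens, char_to_word_offset) as built in read_squad_examples."""
    doc_tokens, char_to_word_offset = [], []
    prev_is_whitespace = True
    for c in text:
        if c in ' \t\r\n':
            prev_is_whitespace = True
        else:
            if prev_is_whitespace:
                doc_tokens.append(c)      # start a new word
            else:
                doc_tokens[-1] += c       # extend the current word
            prev_is_whitespace = False
        char_to_word_offset.append(len(doc_tokens) - 1)
    return doc_tokens, char_to_word_offset

tokens, offsets = whitespace_tokenize_with_offsets("the cat sat")
# An answer with answer_start=4 and text "cat" maps to word span [1, 1]
# via offsets[4] and offsets[4 + 3 - 1], exactly as in the reader above.
```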
squad_train = read_squad_examples('train-v1.1.json', True)
squad_test = read_squad_examples('dev-v1.1.json', False)
# +
import six
def _improve_answer_span(
doc_tokens, input_start, input_end, tokenizer, orig_answer_text
):
"""Returns tokenized answer spans that better match the annotated answer."""
# The SQuAD annotations are character based. We first project them to
# whitespace-tokenized words. But then after WordPiece tokenization, we can
# often find a "better match". For example:
#
# Question: What year was <NAME> born?
# Context: The leader was <NAME> (1895-1943).
# Answer: 1895
#
# The original whitespace-tokenized answer will be "(1895-1943).". However
# after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
# the exact answer, 1895.
#
# However, this is not always possible. Consider the following:
#
    # Question: What country is the top exporter of electronics?
    # Context: The Japanese electronics industry is the largest in the world.
# Answer: Japan
#
# In this case, the annotator chose "Japan" as a character sub-span of
# the word "Japanese". Since our WordPiece tokenizer does not split
# "Japanese", we just use "Japanese" as the annotation. This is fairly rare
# in SQuAD, but does happen.
tok_answer_text = ' '.join(tokenizer.tokenize(orig_answer_text))
for new_start in range(input_start, input_end + 1):
for new_end in range(input_end, new_start - 1, -1):
text_span = ' '.join(doc_tokens[new_start : (new_end + 1)])
if text_span == tok_answer_text:
return (new_start, new_end)
return (input_start, input_end)
def _check_is_max_context(doc_spans, cur_span_index, position):
"""Check if this is the 'max context' doc span for the token."""
# Because of the sliding window approach taken to scoring documents, a single
# token can appear in multiple documents. E.g.
# Doc: the man went to the store and bought a gallon of milk
# Span A: the man went to the
# Span B: to the store and bought
# Span C: and bought a gallon of
# ...
#
# Now the word 'bought' will have two scores from spans B and C. We only
# want to consider the score with "maximum context", which we define as
# the *minimum* of its left and right context (the *sum* of left and
# right context will always be the same, of course).
#
# In the example the maximum context for 'bought' would be span C since
# it has 1 left context and 3 right context, while span B has 4 left context
# and 0 right context.
best_score = None
best_span_index = None
for (span_index, doc_span) in enumerate(doc_spans):
end = doc_span.start + doc_span.length - 1
if position < doc_span.start:
continue
if position > end:
continue
num_left_context = position - doc_span.start
num_right_context = end - position
score = (
min(num_left_context, num_right_context) + 0.01 * doc_span.length
)
if best_score is None or score > best_score:
best_score = score
best_span_index = span_index
return cur_span_index == best_span_index
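The docstring example in `_check_is_max_context` can be verified directly: the token "bought" (position 7) gets a more balanced left/right context from span C than from span B, so C wins. A self-contained re-run of the same scoring rule (helper name is mine):

```python
import collections

DocSpan = collections.namedtuple('DocSpan', ['start', 'length'])

def max_context_span(doc_spans, position):
    """Index of the span giving `position` the best min(left, right) context."""
    best_score, best_index = None, None
    for span_index, span in enumerate(doc_spans):
        end = span.start + span.length - 1
        if not (span.start <= position <= end):
            continue
        # min(left, right) context, with a small bonus for longer spans
        score = min(position - span.start, end - position) + 0.01 * span.length
        if best_score is None or score > best_score:
            best_score, best_index = score, span_index
    return best_index

# Doc: "the man went to the store and bought a gallon of milk" (tokens 0..11)
span_b = DocSpan(start=3, length=5)   # "to the store and bought"
span_c = DocSpan(start=6, length=5)   # "and bought a gallon of"
```

For "bought" (position 7), span B scores min(4, 0) + 0.05 = 0.05 while span C scores min(1, 3) + 0.05 = 1.05.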
def get_final_text(pred_text, orig_text, do_lower_case):
"""Project the tokenized prediction back to the original text."""
# When we created the data, we kept track of the alignment between original
# (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
# now `orig_text` contains the span of our original text corresponding to the
# span that we predicted.
#
# However, `orig_text` may contain extra characters that we don't want in
# our prediction.
#
# For example, let's say:
# pred_text = <NAME>
# orig_text = <NAME>
#
# We don't want to return `orig_text` because it contains the extra "'s".
#
# We don't want to return `pred_text` because it's already been normalized
# (the SQuAD eval script also does punctuation stripping/lower casing but
# our tokenizer does additional normalization like stripping accent
# characters).
#
# What we really want to return is "<NAME>".
#
    # Therefore, we have to apply a semi-complicated alignment heuristic between
    # `pred_text` and `orig_text` to get a character-to-character alignment. This
    # can fail in certain cases, in which case we just return `orig_text`.
def _strip_spaces(text):
ns_chars = []
ns_to_s_map = collections.OrderedDict()
for (i, c) in enumerate(text):
if c == ' ':
continue
ns_to_s_map[len(ns_chars)] = i
ns_chars.append(c)
ns_text = ''.join(ns_chars)
return (ns_text, ns_to_s_map)
# We first tokenize `orig_text`, strip whitespace from the result
# and `pred_text`, and check if they are the same length. If they are
# NOT the same length, the heuristic has failed. If they are the same
# length, we assume the characters are one-to-one aligned.
tokenizer = tokenization.BasicTokenizer(do_lower_case = do_lower_case)
tok_text = ' '.join(tokenizer.tokenize(orig_text))
start_position = tok_text.find(pred_text)
if start_position == -1:
return orig_text
end_position = start_position + len(pred_text) - 1
(orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
(tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
if len(orig_ns_text) != len(tok_ns_text):
return orig_text
# We then project the characters in `pred_text` back to `orig_text` using
# the character-to-character alignment.
tok_s_to_ns_map = {}
for (i, tok_index) in six.iteritems(tok_ns_to_s_map):
tok_s_to_ns_map[tok_index] = i
orig_start_position = None
if start_position in tok_s_to_ns_map:
ns_start_position = tok_s_to_ns_map[start_position]
if ns_start_position in orig_ns_to_s_map:
orig_start_position = orig_ns_to_s_map[ns_start_position]
if orig_start_position is None:
return orig_text
orig_end_position = None
if end_position in tok_s_to_ns_map:
ns_end_position = tok_s_to_ns_map[end_position]
if ns_end_position in orig_ns_to_s_map:
orig_end_position = orig_ns_to_s_map[ns_end_position]
if orig_end_position is None:
return orig_text
output_text = orig_text[orig_start_position : (orig_end_position + 1)]
return output_text
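`_strip_spaces` is the workhorse of the alignment in `get_final_text`: it removes spaces and records, for every character of the stripped string, its index in the original. A standalone copy makes the mapping concrete:

```python
import collections

def strip_spaces(text):
    """Return (text without spaces, map: stripped index -> original index)."""
    ns_chars = []
    ns_to_s_map = collections.OrderedDict()
    for i, c in enumerate(text):
        if c == ' ':
            continue
        ns_to_s_map[len(ns_chars)] = i    # stripped position -> source position
        ns_chars.append(c)
    return ''.join(ns_chars), ns_to_s_map

stripped, mapping = strip_spaces('a b  c')
```

Inverting this map for both `orig_text` and `tok_text` is what lets `get_final_text` project a span in the normalized text back onto the original characters.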
# +
max_seq_length = 384
doc_stride = 128
max_query_length = 64
import collections
def example_feature(examples, is_training = True):
inputs_ids, input_masks, segment_ids, start_positions, end_positions = [], [], [], [], []
token_to_orig_maps, token_is_max_contexts, tokenss = [], [], []
indices = []
for (example_index, example) in enumerate(examples):
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
query_tokens = query_tokens[:max_query_length]
tok_to_orig_index = []
orig_to_tok_index = []
all_doc_tokens = []
for (i, token) in enumerate(example.doc_tokens):
orig_to_tok_index.append(len(all_doc_tokens))
sub_tokens = tokenizer.tokenize(token)
for sub_token in sub_tokens:
tok_to_orig_index.append(i)
all_doc_tokens.append(sub_token)
tok_start_position = None
tok_end_position = None
if is_training and example.is_impossible:
tok_start_position = -1
tok_end_position = -1
if is_training and not example.is_impossible:
tok_start_position = orig_to_tok_index[example.start_position]
if example.end_position < len(example.doc_tokens) - 1:
tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
else:
tok_end_position = len(all_doc_tokens) - 1
(tok_start_position, tok_end_position) = _improve_answer_span(
all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
example.orig_answer_text)
max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
_DocSpan = collections.namedtuple("DocSpan", ["start", "length"])
doc_spans = []
start_offset = 0
while start_offset < len(all_doc_tokens):
length = len(all_doc_tokens) - start_offset
if length > max_tokens_for_doc:
length = max_tokens_for_doc
doc_spans.append(_DocSpan(start=start_offset, length=length))
if start_offset + length == len(all_doc_tokens):
break
start_offset += min(length, doc_stride)
for (doc_span_index, doc_span) in enumerate(doc_spans):
tokens = []
token_to_orig_map = {}
token_is_max_context = {}
segment_id = []
tokens.append('[CLS]')
segment_id.append(0)
for token in query_tokens:
tokens.append(token)
segment_id.append(0)
tokens.append('[SEP]')
segment_id.append(0)
for i in range(doc_span.length):
split_token_index = doc_span.start + i
token_to_orig_map[len(tokens)] = tok_to_orig_index[
split_token_index
]
is_max_context = _check_is_max_context(
doc_spans, doc_span_index, split_token_index
)
token_is_max_context[len(tokens)] = is_max_context
tokens.append(all_doc_tokens[split_token_index])
segment_id.append(1)
tokens.append('[SEP]')
segment_id.append(1)
input_id = tokenizer.convert_tokens_to_ids(tokens)
input_mask = [1] * len(input_id)
while len(input_id) < max_seq_length:
input_id.append(0)
input_mask.append(0)
segment_id.append(0)
assert len(input_id) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_id) == max_seq_length
start_position = None
end_position = None
if is_training and not example.is_impossible:
# For training, if our document chunk does not contain an annotation
# we throw it out, since there is nothing to predict.
doc_start = doc_span.start
doc_end = doc_span.start + doc_span.length - 1
out_of_span = False
if not (
tok_start_position >= doc_start
and tok_end_position <= doc_end
):
out_of_span = True
if out_of_span:
start_position = 0
end_position = 0
else:
doc_offset = len(query_tokens) + 2
start_position = tok_start_position - doc_start + doc_offset
end_position = tok_end_position - doc_start + doc_offset
if is_training and example.is_impossible:
start_position = 0
end_position = 0
inputs_ids.append(input_id)
input_masks.append(input_mask)
segment_ids.append(segment_id)
start_positions.append(start_position)
end_positions.append(end_position)
token_is_max_contexts.append(token_is_max_context)
token_to_orig_maps.append(token_to_orig_map)
tokenss.append(tokens)
indices.append(example_index)
return (inputs_ids, input_masks, segment_ids, start_positions,
end_positions, token_to_orig_maps, token_is_max_contexts, tokenss, indices)
# -
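The sliding-window logic inside `example_feature` (controlled by `doc_stride`) is easiest to see on small numbers. The same loop, isolated from tokenization (helper name is mine):

```python
def make_doc_spans(n_tokens, max_tokens_for_doc, doc_stride):
    """(start, length) windows covering n_tokens, advancing by doc_stride."""
    spans, start = [], 0
    while start < n_tokens:
        length = min(max_tokens_for_doc, n_tokens - start)
        spans.append((start, length))
        if start + length == n_tokens:    # last window reaches the end
            break
        start += min(length, doc_stride)
    return spans

# A 10-token document with 4-token windows and stride 2:
spans = make_doc_spans(10, 4, 2)
```

Overlapping windows are why a token can appear in several spans, which is what `_check_is_max_context` then disambiguates.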
train_inputs_ids, train_input_masks, train_segment_ids, \
train_start_positions, train_end_positions, \
train_token_to_orig_maps, train_token_is_max_contexts, train_tokens, train_indices = example_feature(squad_train)
test_inputs_ids, test_input_masks, test_segment_ids, \
test_start_positions, test_end_positions, \
test_token_to_orig_maps, test_token_is_max_contexts, test_tokens, test_indices = example_feature(squad_test, False)
bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG)
epoch = 4
batch_size = 12
warmup_proportion = 0.1
n_best_size = 20
num_train_steps = int(len(train_inputs_ids) / batch_size * epoch)
num_warmup_steps = int(num_train_steps * warmup_proportion)
class Model:
def __init__(
self,
learning_rate = 2e-5,
):
self.X = tf.placeholder(tf.int32, [None, None])
self.segment_ids = tf.placeholder(tf.int32, [None, None])
self.input_masks = tf.placeholder(tf.int32, [None, None])
self.start_positions = tf.placeholder(tf.int32, [None])
self.end_positions = tf.placeholder(tf.int32, [None])
model = modeling.BertModel(
config=bert_config,
is_training=True,
input_ids=self.X,
input_mask=self.input_masks,
token_type_ids=self.segment_ids,
use_one_hot_embeddings=False)
final_hidden = model.get_sequence_output()
final_hidden_shape = modeling.get_shape_list(final_hidden, expected_rank=3)
batch_size = final_hidden_shape[0]
seq_length = final_hidden_shape[1]
hidden_size = final_hidden_shape[2]
output_weights = tf.get_variable(
"cls/squad/output_weights", [2, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"cls/squad/output_bias", [2], initializer=tf.zeros_initializer())
final_hidden_matrix = tf.reshape(final_hidden,
[batch_size * seq_length, hidden_size])
logits = tf.matmul(final_hidden_matrix, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
logits = tf.reshape(logits, [batch_size, seq_length, 2])
logits = tf.transpose(logits, [2, 0, 1])
unstacked_logits = tf.unstack(logits, axis=0)
(self.start_logits, self.end_logits) = (unstacked_logits[0], unstacked_logits[1])
print(self.start_logits, self.end_logits)
def compute_loss(logits, positions):
one_hot_positions = tf.one_hot(
positions, depth=seq_length, dtype=tf.float32)
log_probs = tf.nn.log_softmax(logits, axis=-1)
loss = -tf.reduce_mean(
tf.reduce_sum(one_hot_positions * log_probs, axis=-1))
return loss
start_loss = compute_loss(self.start_logits, self.start_positions)
end_loss = compute_loss(self.end_logits, self.end_positions)
self.cost = (start_loss + end_loss) / 2.0
self.optimizer = optimization.create_optimizer(self.cost, learning_rate,
num_train_steps, num_warmup_steps, False)
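The `compute_loss` closure inside the model is standard cross-entropy over sequence positions, applied once for the start and once for the end logits. An equivalent numpy sketch (helper name is mine), handy for sanity checks: with uniform logits over `L` positions the loss is exactly `log(L)`.

```python
import numpy as np

def span_loss(logits, positions):
    """Mean negative log-likelihood of the true positions.

    logits: (batch, seq_len) float array; positions: (batch,) int array.
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numeric stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(positions)), positions].mean()

uniform = span_loss(np.zeros((2, 5)), np.array([0, 3]))   # log(5) expected
```

The model's `self.cost` is then just the average of this quantity for the start and end positions.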
# +
learning_rate = 2e-5
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(
learning_rate
)
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'bert')
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, BERT_INIT_CHKPNT)
# -
for e in range(epoch):
train_loss = 0
pbar = tqdm(
range(0, len(train_inputs_ids), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(train_inputs_ids))
batch_ids = train_inputs_ids[i: index]
batch_masks = train_input_masks[i: index]
batch_segment = train_segment_ids[i: index]
batch_start = train_start_positions[i: index]
batch_end = train_end_positions[i: index]
cost, _ = sess.run(
[model.cost, model.optimizer],
feed_dict = {
model.start_positions: batch_start,
model.end_positions: batch_end,
model.X: batch_ids,
model.segment_ids: batch_segment,
model.input_masks: batch_masks
},
)
pbar.set_postfix(cost = cost)
train_loss += cost
train_loss /= len(train_inputs_ids) / batch_size
print(
'epoch: %d, training loss: %f\n'
% (e, train_loss)
)
starts, ends = [], []
pbar = tqdm(
range(0, len(test_inputs_ids), batch_size), desc = 'test minibatch loop'
)
for i in pbar:
index = min(i + batch_size, len(test_inputs_ids))
batch_ids = test_inputs_ids[i: index]
batch_masks = test_input_masks[i: index]
batch_segment = test_segment_ids[i: index]
start_logits, end_logits = sess.run(
[model.start_logits, model.end_logits],
feed_dict = {
model.X: batch_ids,
model.segment_ids: batch_segment,
model.input_masks: batch_masks
},
)
starts.extend(start_logits.tolist())
ends.extend(end_logits.tolist())
# +
import numpy as np
def _get_best_indexes(logits, n_best_size):
index_and_score = sorted(
enumerate(logits), key = lambda x: x[1], reverse = True
)
best_indexes = []
for i in range(len(index_and_score)):
if i >= n_best_size:
break
best_indexes.append(index_and_score[i][0])
return best_indexes
def _compute_softmax(scores):
"""Compute softmax probability over raw logits."""
if not scores:
return []
max_score = None
for score in scores:
if max_score is None or score > max_score:
max_score = score
exp_scores = []
total_sum = 0.0
for score in scores:
x = math.exp(score - max_score)
exp_scores.append(x)
total_sum += x
probs = []
for score in exp_scores:
probs.append(score / total_sum)
return probs
# -
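`_get_best_indexes` and `_compute_softmax` together produce the scored n-best list used in `to_predict` below. A quick check on toy logits; the two helpers are restated here (under my own names) so the snippet runs on its own:

```python
import math

def get_best_indexes(logits, n_best_size):
    """Indexes of the n_best_size largest logits, highest first."""
    ranked = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
    return [i for i, _ in ranked[:n_best_size]]

def compute_softmax(scores):
    """Softmax with max-subtraction, mirroring _compute_softmax above."""
    if not scores:
        return []
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

best = get_best_indexes([0.1, 2.0, 1.0], 2)
probs = compute_softmax([2.0, 1.0])
```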
def to_predict(
indices,
examples,
start_logits,
end_logits,
tokens,
token_to_orig_maps,
token_is_max_contexts,
max_answer_length = 30,
n_best_size = 20,
do_lower_case = False,
null_score_diff_threshold = 0.0,
output_prediction_file = 'predictions.json',
output_nbest_file = 'nbest_predictions.json',
output_null_log_odds_file = 'null_odds.json',
):
example_index_to_features = collections.defaultdict(list)
for no, feature in enumerate(indices):
example_index_to_features[feature].append(no)
all_predictions = collections.OrderedDict()
all_nbest_json = collections.OrderedDict()
scores_diff_json = collections.OrderedDict()
_PrelimPrediction = collections.namedtuple(
'PrelimPrediction',
[
'feature_index',
'start_index',
'end_index',
'start_logit',
'end_logit',
],
)
for (example_index, example) in enumerate(examples):
features = example_index_to_features[example_index]
prelim_predictions = []
score_null = 1000000
min_null_feature_index = 0
null_start_logit = 0
null_end_logit = 0
for (feature_index, i) in enumerate(features):
start_indexes = _get_best_indexes(start_logits[i], n_best_size)
end_indexes = _get_best_indexes(end_logits[i], n_best_size)
if version_2_with_negative:
feature_null_score = start_logits[i][0] + end_logits[i][0]
if feature_null_score < score_null:
score_null = feature_null_score
min_null_feature_index = feature_index
null_start_logit = start_logits[i][0]
null_end_logit = end_logits[i][0]
for start_index in start_indexes:
for end_index in end_indexes:
if start_index >= len(tokens[i]):
continue
if end_index >= len(tokens[i]):
continue
if start_index not in token_to_orig_maps[i]:
continue
if end_index not in token_to_orig_maps[i]:
continue
if not token_is_max_contexts[i].get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index = i,
start_index = start_index,
end_index = end_index,
start_logit = start_logits[i][start_index],
end_logit = end_logits[i][end_index],
)
)
if version_2_with_negative:
prelim_predictions.append(
_PrelimPrediction(
feature_index = min_null_feature_index,
start_index = 0,
end_index = 0,
start_logit = null_start_logit,
end_logit = null_end_logit,
)
)
prelim_predictions = sorted(
prelim_predictions,
key = lambda x: (x.start_logit + x.end_logit),
reverse = True,
)
_NbestPrediction = collections.namedtuple(
'NbestPrediction', ['text', 'start_logit', 'end_logit']
)
seen_predictions = {}
nbest = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
i = pred.feature_index
if pred.start_index > 0:
tok_tokens = tokens[i][pred.start_index : (pred.end_index + 1)]
orig_doc_start = token_to_orig_maps[i][pred.start_index]
orig_doc_end = token_to_orig_maps[i][pred.end_index]
orig_tokens = example.doc_tokens[
orig_doc_start : (orig_doc_end + 1)
]
tok_text = ' '.join(tok_tokens)
tok_text = tok_text.replace(' ##', '')
tok_text = tok_text.replace('##', '')
tok_text = tok_text.strip()
tok_text = ' '.join(tok_text.split())
orig_text = ' '.join(orig_tokens)
final_text = get_final_text(tok_text, orig_text, do_lower_case)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
else:
final_text = ''
seen_predictions[final_text] = True
nbest.append(
_NbestPrediction(
text = final_text,
start_logit = pred.start_logit,
end_logit = pred.end_logit,
)
)
if version_2_with_negative:
if '' not in seen_predictions:
nbest.append(
_NbestPrediction(
text = '',
start_logit = null_start_logit,
end_logit = null_end_logit,
)
)
if not nbest:
nbest.append(
_NbestPrediction(
text = 'empty', start_logit = 0.0, end_logit = 0.0
)
)
assert len(nbest) >= 1
total_scores = []
best_non_null_entry = None
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
if not best_non_null_entry:
if entry.text:
best_non_null_entry = entry
probs = _compute_softmax(total_scores)
nbest_json = []
for (i, entry) in enumerate(nbest):
output = collections.OrderedDict()
output['text'] = entry.text
output['probability'] = probs[i]
output['start_logit'] = entry.start_logit
output['end_logit'] = entry.end_logit
nbest_json.append(output)
assert len(nbest_json) >= 1
if not version_2_with_negative:
all_predictions[example.qas_id] = nbest_json[0]['text']
else:
score_diff = (
score_null
- best_non_null_entry.start_logit
- (best_non_null_entry.end_logit)
)
scores_diff_json[example.qas_id] = score_diff
if score_diff > null_score_diff_threshold:
all_predictions[example.qas_id] = ''
else:
all_predictions[example.qas_id] = best_non_null_entry.text
all_nbest_json[example.qas_id] = nbest_json
with tf.gfile.GFile(output_prediction_file, 'w') as writer:
writer.write(json.dumps(all_predictions, indent = 4) + '\n')
with tf.gfile.GFile(output_nbest_file, 'w') as writer:
writer.write(json.dumps(all_nbest_json, indent = 4) + '\n')
if version_2_with_negative:
with tf.gfile.GFile(output_null_log_odds_file, 'w') as writer:
writer.write(json.dumps(scores_diff_json, indent = 4) + '\n')
to_predict(test_indices, squad_test, starts, ends,
test_tokens, test_token_to_orig_maps, test_token_is_max_contexts)
# !wget https://raw.githubusercontent.com/allenai/bi-att-flow/master/squad/evaluate-v1.1.py
# !python3 evaluate-v1.1.py dev-v1.1.json predictions.json
| squad-qa/1.bert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/gtbook/robotics/blob/main/S51_diffdrive_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="JoW4C_OkOMhe" tags=["remove-cell"]
# %pip install -q -U gtbook
# + id="10-snNDwOSuC" tags=["remove-cell"]
# + [markdown] id="nAvx4-UCNzt2"
# # A Mobile Robot With Simple Kinematics
#
# > A simple, differential-drive robot that is similar to a car, but can rotate in place.
#
# **This Section is still in draft mode and was released for adventurous spirits (and TAs) only.**
# + tags=["remove-input"]
from gtbook.display import randomImages
from IPython.display import display
display(randomImages(5, 0, "steampunk", 1))
# -
# In this chapter we introduce a robot with differential drive, i.e., two wheels that can spin in either direction to move the robot forwards, backwards, or turn. In contrast to the previous chapter, we will now treat *orientation* as a first-class citizen.
#
# The bulk of this chapter, however, is taken up by a new sensor: the *camera*. The last decade has seen spectacular advances in computer vision powered by new breakthroughs in neural network inference. We discuss image formation, image processing, multilayer perceptrons, and convolutional neural networks.
#
# The chapter has the usual organization:
# - 5.1 State: $x$, $y$, *and* $\theta$
# - 5.2 Actuators: differential drive
# - 5.3 Sensors: Camera, image formation, image processing
# - 5.4 Perception: MLPs and CNNs
# - 5.5 Planning: Dubins vehicles, sampling-based planning
# - 5.6 Learning: how to train your neural network
| S50_diffdrive_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="HTLSpCXvOvaV" colab_type="text"
# # Rhythm Estimation and Evaluation of Bambucos
#
# by:
#
# <NAME>
#
# **Project ACMUS:** https://acmus-mir.github.io/
#
#
# ## Installing libraries
# + id="UFHo3hquOnia" colab_type="code" colab={}
import sys
import os
import pandas as pd
import numpy as np
import re
import time
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] =(12,6)
# + id="efijaf17CDGa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} executionInfo={"status": "ok", "timestamp": 1596050770469, "user_tz": 300, "elapsed": 38820, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="40a4ad42-763e-4a7a-8bfe-fbce73391a4a"
# !python -m pip install essentia -q
# !python -m pip install madmom==0.15.0 -q # Package for beat analysis that includes simple spectral flux calculation
# + id="yRsWzrm1RKPO" colab_type="code" colab={}
import madmom
import essentia.standard as es
import madmom.evaluation.beats as be # beat evaluation
# + id="uZoADrFzgnYj" colab_type="code" colab={}
# %load_ext google.colab.data_table
# + [markdown] id="4NTlf-R-Tqzs" colab_type="text"
# ## Get Audiofiles
#
# Audio files and annotations are hosted on Zenodo:
#
# https://zenodo.org/record/3965447/files/rhythm_set.zip?download=1
# + id="SAktzEhyBtfJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 215} executionInfo={"status": "ok", "timestamp": 1596050919301, "user_tz": 300, "elapsed": 187639, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="ce1a1e50-81c3-4fbb-cef9-c1c64920536d"
#Download files from Zenodo
# !wget -O rhythm_set.zip https://zenodo.org/record/3965447/files/rhythm_set.zip?download=1
# + id="9mhdLusRCpba" colab_type="code" colab={}
#Unzip file
# !unzip -q rhythm_set.zip -d ./rhythm_set
# + [markdown] id="1NcRI-vbPGZk" colab_type="text"
# ## Get bambucos information
# + id="PUYtJDfWRmtp" colab_type="code" colab={}
audioDirectory = './rhythm_set/Audio/' # Audio files
anotationDirectory='./rhythm_set/Beat_annotations/' # Anotation files
# + id="FKvSWm6XVete" colab_type="code" colab={}
bambuco_set = pd.DataFrame(columns=['audiofile','simple_ann','compound_ann'])
# + id="apdd511lJoTc" colab_type="code" colab={}
rhythm_set = pd.read_csv("./rhythm_set/rhythm_set.csv",sep = ';',encoding = "ISO-8859-1")
# + id="SMmXdllDa5_l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} executionInfo={"status": "ok", "timestamp": 1596050944043, "user_tz": 300, "elapsed": 212327, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="39eaff7c-f821-4871-8b24-79bcfbe61cd6"
bambuco_set['audiofile'] = rhythm_set[rhythm_set['Genre']=='bambuco']['File name'].str.strip()
bambuco_set.reset_index(drop=True, inplace = True)
print(f'Number of Bambuco Files = {len(bambuco_set)}')
# + id="w2OjemuCKYFk" colab_type="code" colab={}
bambuco_set['simple_ann'] = bambuco_set['audiofile'].str.split('.').apply(lambda x: x[0])+'(0).txt'
bambuco_set['compound_ann'] = bambuco_set['audiofile'].str.split('.').apply(lambda x: x[0])+'(1).txt'
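Deriving the annotation names with `str.split('.')` truncates at the *first* dot, which silently mangles filenames containing extra dots; `os.path.splitext` only strips the final extension. A small hedged sketch of the safer pattern (helper name is mine):

```python
import os

def annotation_names(audiofile):
    """Map 'piece.wav' -> ('piece(0).txt', 'piece(1).txt') as in the cells above."""
    stem = os.path.splitext(audiofile)[0]   # strips only the last extension
    return stem + '(0).txt', stem + '(1).txt'

simple, compound = annotation_names('bambuco_01.v2.wav')
```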
# + [markdown] id="7aPTJ16w1qIa" colab_type="text"
#
# ## Get Ground Truth
# + id="nn_HttkIBaQW" colab_type="code" colab={}
def read_ann(file):
    '''
    Read a .txt annotation file (one beat time per line)
    and return the beat times as a numpy array.
    '''
    beats = []
    with open(os.path.join(anotationDirectory, file), 'r') as f:
        for line in f:
            beats.append(np.float32(line.split()[0]))
    return np.asarray(beats)
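`read_ann` keeps only the first whitespace-separated field of each line (the beat time, discarding any beat-number column). The same parse on an in-memory example, free of the directory globals (helper name is mine):

```python
import io
import numpy as np

def parse_beats(fileobj):
    """First field of each non-empty line, as float32 beat times."""
    return np.asarray([np.float32(line.split()[0])
                       for line in fileobj if line.strip()])

beats = parse_beats(io.StringIO("0.52 1\n1.04 2\n1.55 3\n"))
```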
# + id="F68TotMG1uEk" colab_type="code" colab={}
bambuco_set['simple_gt'] = bambuco_set['simple_ann'].apply(read_ann)
bambuco_set['compound_gt'] = bambuco_set['compound_ann'].apply(read_ann)
# + [markdown] id="Cye9NT2RTogl" colab_type="text"
# ## Estimate Beats
#
#
# + [markdown] id="x7GqLM6yE_36" colab_type="text"
# ### Madmom Beat Estimation
# + id="OpLAwdoBUOjg" colab_type="code" colab={}
# DownBeat = DB_
Rhythm_madmom = pd.DataFrame(columns=['File_name','Madmom_Beats'])
# Madmom
proc_beats = madmom.features.beats.BeatTrackingProcessor(fps=100)
for num, file in bambuco_set.iterrows():
# Madmom Tempo estimation
act = madmom.features.beats.RNNBeatProcessor()(audioDirectory+file['audiofile'])
# Madmom Beat estimation
beatsMadmom = np.float32(proc_beats(act))
Rhythm_madmom.loc[len(Rhythm_madmom)] = [file['audiofile'], beatsMadmom]
# + [markdown] id="k3ehfEMo_VPh" colab_type="text"
# ### Essentia Beat estimation
# + id="f26oy0qccY0x" colab_type="code" colab={}
Rhythm_ess = pd.DataFrame(columns=['File_name', 'Essentia_Beats', 'Essentia_Bpm', 'Confidence'])
# Essentia: the multifeature beat tracker is re-instantiated for every file
# so that no state carries over between tracks
for num, file in bambuco_set.iterrows():
    rhythm_extractor = es.RhythmExtractor2013(method="multifeature")
# Essentia estimations
audio = es.MonoLoader(filename=audioDirectory+file['audiofile'])()
    try:
        bpmEss, beatsEss, beats_confidence, _, _ = rhythm_extractor(audio)
    except Exception:
        # fall back to zeros when the extractor fails on a file
        bpmEss = 0
        beatsEss = 0
        beats_confidence = 0
Rhythm_ess.loc[len(Rhythm_ess)] = [file['audiofile'], beatsEss, bpmEss, beats_confidence]
del audio
# + [markdown] id="d3LTwHQ0rsFk" colab_type="text"
# # Beat Evaluation
#
# + [markdown] id="ky3hqETkBc8R" colab_type="text"
# ## Simple Meter
#
# + [markdown] id="zv1mNMSoBlEW" colab_type="text"
# ### Madmom Evaluation
#
# + id="4fR7Y1smBvtl" colab_type="code" colab={}
# Create one column per evaluation metric
for col in ['fmeasure', 'amlc', 'amlt', 'cmlc', 'cmlt', 'Eval']:
    Rhythm_madmom[col] = ''
# + id="1UKqir40CHVu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 828} executionInfo={"status": "ok", "timestamp": 1596051432885, "user_tz": 300, "elapsed": 701136, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="e554cbf9-7fb2-46fb-c6f3-8ba011f9308f"
for n, file in bambuco_set.iterrows():
    # .at avoids chained assignment, which may silently fail on a copy
    ev = be.BeatEvaluation(Rhythm_madmom['Madmom_Beats'][n], file['simple_gt'])
    Rhythm_madmom.at[n, 'Eval'] = ev
    Rhythm_madmom.at[n, 'fmeasure'] = ev.fmeasure * 100
    Rhythm_madmom.at[n, 'amlc'] = ev.amlc * 100
    Rhythm_madmom.at[n, 'amlt'] = ev.amlt * 100
    Rhythm_madmom.at[n, 'cmlc'] = ev.cmlc * 100
    Rhythm_madmom.at[n, 'cmlt'] = ev.cmlt * 100
# + id="qeU-4W05CQMY" colab_type="code" colab={}
Rhythm_madmom[['fmeasure','amlc', 'amlt', 'cmlc', 'cmlt']] = Rhythm_madmom[['fmeasure', 'amlc', 'amlt', 'cmlc', 'cmlt']].apply(pd.to_numeric)
# + [markdown] id="D57V0yNqCflH" colab_type="text"
# #### Results
#
# + id="sk4HJfiECmr3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 257} executionInfo={"status": "ok", "timestamp": 1596051432889, "user_tz": 300, "elapsed": 701133, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="c7d3bd21-d0cd-49bd-f9e6-e4525c274d88"
colu = ['fmeasure', 'amlc', 'amlt', 'cmlc', 'cmlt']
Rhythm_madmom[colu].describe()
# + id="xJ4bf5hyCow_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 110} executionInfo={"status": "ok", "timestamp": 1596051433090, "user_tz": 300, "elapsed": 701329, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="b73ca686-e771-48dc-9787-d0b82453e96b"
pd.DataFrame(Rhythm_madmom[colu].mean()).T
# + [markdown] id="Qlm5q_qNcJb4" colab_type="text"
# #### Boxplot
#
# + id="UZnMyVaLYMDw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 390} executionInfo={"status": "ok", "timestamp": 1596051433335, "user_tz": 300, "elapsed": 701571, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="3bf1f584-d048-4d28-fb3f-fc747cd1e11f"
plt.rcParams["figure.figsize"] =(12,6)
Rhythm_madmom[colu].boxplot()
plt.title('Madmom results, simple meter');
# + [markdown] id="ryuz0WjacaDI" colab_type="text"
# ### Essentia Evaluation
# + colab_type="code" id="YOhGyKlxdQyX" colab={}
# Create one column per evaluation metric
for col in ['fmeasure', 'amlc', 'amlt', 'cmlc', 'cmlt', 'Eval']:
    Rhythm_ess[col] = ''
# + colab_type="code" id="oFrFgjnLdQyg" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1596051433793, "user_tz": 300, "elapsed": 702022, "user": {"displayName": "<NAME>", "photoUrl": "<KEY>", "userId": "14263137554151707040"}} outputId="94a79797-6c6e-4728-9391-bfb63c37e699"
for n, file in bambuco_set.iterrows():
    ev = be.BeatEvaluation(Rhythm_ess['Essentia_Beats'][n], file['simple_gt'])
    Rhythm_ess.at[n, 'Eval'] = ev
    Rhythm_ess.at[n, 'fmeasure'] = ev.fmeasure * 100
    Rhythm_ess.at[n, 'amlc'] = ev.amlc * 100
    Rhythm_ess.at[n, 'amlt'] = ev.amlt * 100
    Rhythm_ess.at[n, 'cmlc'] = ev.cmlc * 100
    Rhythm_ess.at[n, 'cmlt'] = ev.cmlt * 100
# + colab_type="code" id="V5fR14ZHdQyj" colab={}
Rhythm_ess[colu] = Rhythm_ess[colu].apply(pd.to_numeric)
# + [markdown] colab_type="text" id="sAOreAq2dQyn"
# #### Results
#
# + colab_type="code" id="6wHuqlh8dQyo" colab={"base_uri": "https://localhost:8080/", "height": 257} executionInfo={"status": "ok", "timestamp": 1596051434025, "user_tz": 300, "elapsed": 702244, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="110fbf53-bfc1-4bc8-a46d-bb0e6ab5c9e9"
Rhythm_ess[colu].describe()
# + colab_type="code" id="egaYI4m7dQyr" colab={"base_uri": "https://localhost:8080/", "height": 110} executionInfo={"status": "ok", "timestamp": 1596051434027, "user_tz": 300, "elapsed": 702242, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="483a4529-2f9f-4a66-c08d-51987626a1c8"
pd.DataFrame(Rhythm_ess[colu].mean()).T
# + [markdown] colab_type="text" id="yhknmkVVdQyt"
# #### Boxplot
#
# + colab_type="code" id="Bq0GRDNndQyu" colab={"base_uri": "https://localhost:8080/", "height": 390} executionInfo={"status": "ok", "timestamp": 1596051434256, "user_tz": 300, "elapsed": 702466, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="67cf462f-e3b9-4df9-9264-93ea81183e6c"
plt.rcParams["figure.figsize"] =(12,6)
Rhythm_ess[colu].boxplot()
plt.title('Essentia results, simple meter');
# + [markdown] id="XwcrQPEmp5B4" colab_type="text"
# ## Compound Meter
# + [markdown] colab_type="text" id="iEJ_n51ov3nl"
# ### Madmom Evaluation
#
# + colab_type="code" id="rwBEzRpBv3nm" colab={}
# Create one column per evaluation metric
for col in ['fmeasure_comp', 'amlc_comp', 'amlt_comp', 'cmlc_comp', 'cmlt_comp', 'Eval_comp']:
    Rhythm_madmom[col] = ''
# + colab_type="code" id="v3607Xb9v3np" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1596051434999, "user_tz": 300, "elapsed": 703199, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/<KEY>", "userId": "14263137554151707040"}} outputId="4b93081f-a4eb-4fed-c060-c81bd1e3c300"
for n, file in bambuco_set.iterrows():
    ev = be.BeatEvaluation(Rhythm_madmom['Madmom_Beats'][n], file['compound_gt'])
    Rhythm_madmom.at[n, 'Eval_comp'] = ev
    Rhythm_madmom.at[n, 'fmeasure_comp'] = ev.fmeasure * 100
    Rhythm_madmom.at[n, 'amlc_comp'] = ev.amlc * 100
    Rhythm_madmom.at[n, 'amlt_comp'] = ev.amlt * 100
    Rhythm_madmom.at[n, 'cmlc_comp'] = ev.cmlc * 100
    Rhythm_madmom.at[n, 'cmlt_comp'] = ev.cmlt * 100
# + colab_type="code" id="zaMJUI7Rv3ns" colab={}
colu2 = ['fmeasure_comp', 'amlc_comp', 'amlt_comp', 'cmlc_comp', 'cmlt_comp']
Rhythm_madmom[colu2] = Rhythm_madmom[colu2].apply(pd.to_numeric)
# + [markdown] colab_type="text" id="C1GTvhZ1v3nu"
# #### Results
#
# + colab_type="code" id="zBOGf8wiv3nv" colab={"base_uri": "https://localhost:8080/", "height": 257} executionInfo={"status": "ok", "timestamp": 1596051435002, "user_tz": 300, "elapsed": 703193, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="da6fa320-df1f-49ca-bd2f-cf29005ca631"
Rhythm_madmom[colu2].describe()
# + colab_type="code" id="w-mz0gbqv3ny" colab={"base_uri": "https://localhost:8080/", "height": 110} executionInfo={"status": "ok", "timestamp": 1596051435003, "user_tz": 300, "elapsed": 703190, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="1af155e0-e6df-4764-f9bd-765bc962ee86"
pd.DataFrame(Rhythm_madmom[colu2].mean()).T
# + [markdown] colab_type="text" id="DektS3jMv3n1"
# #### Boxplot
#
# + colab_type="code" id="epENEBUov3n1" colab={"base_uri": "https://localhost:8080/", "height": 391} executionInfo={"status": "ok", "timestamp": 1596051435335, "user_tz": 300, "elapsed": 703517, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="37034b72-67fc-48fa-b0aa-1e2b746b160f"
plt.rcParams["figure.figsize"] =(12,6)
Rhythm_madmom[colu2].boxplot()
plt.title('Madmom results, compound meter');
# + [markdown] id="z8h3hh3wFEmp" colab_type="text"
# ### Essentia Evaluation
# + colab_type="code" id="asBeNdQ9FCpw" colab={}
# Create one column per evaluation metric
for col in ['fmeasure_comp', 'amlc_comp', 'amlt_comp', 'cmlc_comp', 'cmlt_comp', 'Eval_comp']:
    Rhythm_ess[col] = ''
# + colab_type="code" id="NSVLHkDxFCp3" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1596051436023, "user_tz": 300, "elapsed": 704196, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="9ec26b06-be66-47c1-a257-2f74e93d352d"
for n, file in bambuco_set.iterrows():
    ev = be.BeatEvaluation(Rhythm_ess['Essentia_Beats'][n], file['compound_gt'])
    Rhythm_ess.at[n, 'Eval_comp'] = ev
    Rhythm_ess.at[n, 'fmeasure_comp'] = ev.fmeasure * 100
    Rhythm_ess.at[n, 'amlc_comp'] = ev.amlc * 100
    Rhythm_ess.at[n, 'amlt_comp'] = ev.amlt * 100
    Rhythm_ess.at[n, 'cmlc_comp'] = ev.cmlc * 100
    Rhythm_ess.at[n, 'cmlt_comp'] = ev.cmlt * 100
# + colab_type="code" id="-7lXgg25FCp7" colab={}
Rhythm_ess[colu2] = Rhythm_ess[colu2].apply(pd.to_numeric)
# + [markdown] colab_type="text" id="2EbYKG8tFCp-"
# #### Results
#
# + colab_type="code" id="fUfWz3LtFCp-" colab={"base_uri": "https://localhost:8080/", "height": 257} executionInfo={"status": "ok", "timestamp": 1596051436028, "user_tz": 300, "elapsed": 704194, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="efcd8327-b14a-4357-9883-702d65716dbd"
Rhythm_ess[colu2].describe()
# + colab_type="code" id="054k14D7FCqB" colab={"base_uri": "https://localhost:8080/", "height": 110} executionInfo={"status": "ok", "timestamp": 1596051436029, "user_tz": 300, "elapsed": 704192, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="d182175a-ea8f-4ab4-f52f-104c5b147312"
pd.DataFrame(Rhythm_ess[colu2].mean()).T
# + [markdown] colab_type="text" id="_7EX8PnqFCqE"
# #### Boxplot
#
# + colab_type="code" id="91ilQwR_FCqE" colab={"base_uri": "https://localhost:8080/", "height": 391} executionInfo={"status": "ok", "timestamp": 1596051436419, "user_tz": 300, "elapsed": 704578, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="b5c4c97b-eb6d-4139-8a94-208681b4f04d"
plt.rcParams["figure.figsize"] =(12,6)
Rhythm_ess[colu2].boxplot()
plt.title('Essentia results, compound meter');
# + [markdown] id="gIAfzNKfxtr5" colab_type="text"
# # General Results
# + id="HN1RW1KjxwTz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 149} executionInfo={"status": "ok", "timestamp": 1596051436420, "user_tz": 300, "elapsed": 704575, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="ac60584d-b4b7-44b4-bf20-b5f80aef4c5f"
simple = pd.concat([Rhythm_madmom[colu].mean(), Rhythm_ess[colu].mean()], axis=1).T
simple['names'] =['Madmom', 'Essentia']
simple.set_index('names',inplace=True)
print('Simple Meter results')
simple.round(2)
# + id="qgogj7YLIlV7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 149} executionInfo={"status": "ok", "timestamp": 1596051436421, "user_tz": 300, "elapsed": 704572, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Girm_rOLoaP9uyn43LyylGFQKqYLhiEo6XKcPhv_A=s64", "userId": "14263137554151707040"}} outputId="8a915722-60ef-49f6-be3b-a393329a5eeb"
compound = pd.concat([Rhythm_madmom[colu2].mean(), Rhythm_ess[colu2].mean()], axis=1).T
compound['names'] =['Madmom', 'Essentia']
compound.set_index('names',inplace=True)
print('Compound Meter Results')
compound.round(2)
# File: ISMIR2020/ISMIR2020_Rhythm_Acmus.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:deepLearning]
# language: python
# name: conda-env-deepLearning-py
# ---
# # Data preprocessing
#Create references to important directories we will use over and over
import os, sys
current_dir = os.getcwd()
SCRIPTS_HOME_DIR = current_dir
DATA_HOME_DIR = current_dir+'/data'
DATA_HOME_DIR
from glob import glob
import numpy as np
import _pickle as pickle
import PIL
from PIL import Image
from PIL import ImageOps
from tqdm import tqdm
import bcolz
def resize_32(image_path):
""" chops off the top half of a 32 by 32 image so it is 16 by 16 """
# open image and get array
image_array = np.asarray(Image.open(image_path))
if image_array.shape == (32, 32):
image_array = image_array[16:,:]
img = Image.fromarray(image_array, 'L')
img.save(image_path)
# %cd $DATA_HOME_DIR/train/binary
folders = ([name for name in os.listdir(".") if os.path.isdir(name)])
for folder in tqdm(folders):
os.chdir(DATA_HOME_DIR + '/train/binary/' + folder)
g = glob('*.png')
for image_path in g:
resize_32(image_path)
# %cd $DATA_HOME_DIR/valid/binary
folders = ([name for name in os.listdir(".") if os.path.isdir(name)])
for folder in tqdm(folders):
os.chdir(DATA_HOME_DIR + '/valid/binary/' + folder)
g = glob('*.png')
for image_path in g:
resize_32(image_path)
# ## Average Image
# %cd $DATA_HOME_DIR/train/binary
folders = ([name for name in os.listdir(".") if os.path.isdir(name)])
imgs_all = []
for folder in tqdm(folders):
os.chdir(DATA_HOME_DIR + '/train/binary/' + folder)
g = glob('*.png')
imgs = np.array([np.asarray(Image.open(image_path)) for image_path in g])
img_avg = imgs.sum(axis=0) / imgs.shape[0]
imgs_all.append(img_avg)
from matplotlib.pyplot import imshow
# %matplotlib inline
imshow(imgs_all[0])
imshow(imgs_all[1])
imshow(imgs_all[2])
# ### Color
# %cd $DATA_HOME_DIR/train/color
folders = ([name for name in os.listdir(".") if os.path.isdir(name)])
for folder in tqdm(folders):
os.chdir(DATA_HOME_DIR + '/train/color/' + folder)
g = glob('*.png')
for image_path in g:
image_array = np.asarray(Image.open(image_path))
if image_array.shape == (32, 32, 3):
image_array = image_array[16:,:,:]
img = Image.fromarray(image_array, 'RGB')
img.save(image_path)
# File: line-follower/src/old_lane_follower_past_project/data_preprocessing.ipynb
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Multinomial Logistic Regression
# We have already seen binomial logistic regression, where the dependent variable is categorical with 2 possible classes.
#
# With **multinomial logistic regression**, we can also handle the case of a categorical variable with any number of classes.
#
# If our dependent variable is ordinal in nature (i.e. categories with a well-defined order), we would instead opt for **ordinal logistic regression**.
# To compute a multinomial logistic regression in R, we have several functions available, in different packages. Because of its relative simplicity, we will use the [__nnet__](https://cran.r-project.org/web/packages/nnet) package here, and more specifically the __multinom__ function.
#
# We can load the package with
library(nnet)
# and if this raises an error saying that the package is not available, we first install it with:
install.packages(c("nnet"))
# As a concrete example, we reuse the same example as for binomial logistic regression, but this time using all 3 categories of the dependent variable. As a reminder, we are trying to predict the typology of a region as one of the following 3 categories:
#
# - Urban centres
# - Areas under the influence of urban centres
# - Areas outside the influence of urban centres
#
# For practical reasons, we have added a column with shorter labels, namely "CENTRE", "AGGLO" and "RURAL".
#
# The independent variables are:
#
# - **NADHO**: proportion of homemakers (women and men)
# - **NADRET**: proportion of retired people among the active population
# - **AD3PRIM**: proportion of people working in the primary sector
# - **AD3SEC**: proportion of people working in the secondary sector
# Note also that our independent variables should take values roughly between 0 and 1. Since we have proportions, this is already the case. In other situations, a small transformation is needed.
# As usual, we start by reading the data table:
d = read.csv(file="ch-socioeco-typologie.tsv", sep="\t")
# Let us take a closer look at the dependent variable:
summary(d$typologie_ofs_court)
# From the initial dataset, we create a training dataset and a test dataset for validation. The training dataset contains a random sample with 90% of the data.
# + code_folding=[]
idx = sample(nrow(d), size=0.9*nrow(d))
dtrain = d[idx,]
dtest = d[-idx,]
# -
# We can now fit the multinomial logistic regression:
regmlogit = multinom(typologie_ofs_court ~ NADHO + NADRET + AD3PRIM + AD3SEC, data=dtrain)
# and then display the information about the coefficients, the deviance and the AIC, as we did for binomial logistic regression:
summary(regmlogit)
# We actually get 2 different models, one for each category except one. This is because we estimate probabilities for each category, and the probabilities must necessarily sum to 1. Consequently, once we have computed the probabilities for two of the categories, we can obtain the probability of the last category with a simple calculation.
# For the evaluation, we can build a simple comparison table between the predicted and the actual categories, for both the training dataset and the test dataset:
table(predict(regmlogit, newdata=dtrain), dtrain$typologie_ofs_court)
table(predict(regmlogit, newdata=dtest), dtest$typologie_ofs_court)
# And we can compute the proportion of correctly classified regions, for both datasets:
accuracy_train = sum(predict(regmlogit, newdata=dtrain) == dtrain$typologie_ofs_court) / nrow(dtrain)
accuracy_test = sum(predict(regmlogit, newdata=dtest) == dtest$typologie_ofs_court) / nrow(dtest)
accuracy_train
accuracy_test
# We therefore get about 77% correct answers on the training data, and 70% on the validation data. The latter figure is the better measure of the model's quality. Our model is thus mediocre at best...
# ## NNet??
# For our multinomial logistic regression, we used the **[nnet](https://cran.r-project.org/web/packages/nnet)** package. It is a package for artificial neural networks (ANN). So why use a neural-network package when we simply want to fit a regression?
#
# In fact, logistic regression can be seen as a simple special case of a neural network... And the function we used, `multinom`, automatically builds a neural network internally. **So we did indeed use a neural network...**
# File: 26-glm/3-logit-multinomiale.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# language: python
# name: python3
# ---
# # Data Structures
# ### 1. Iterables and Iterators
#
# An iterable is a collection of data that can be traversed,
# for example a list, a dictionary or a string.
#
# An iterator is an object that lets you traverse an iterable one value at a time.
#
# Python's **for** loop is actually built on top of two other functions, iter() and next(), which turn an iterable into an iterator and step through it.
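# What a for loop does under the hood can be shown directly with iter() and
# next():

```python
# Obtain an iterator from an iterable with iter(), then pull values with
# next() until StopIteration is raised (which is what ends a for loop).
values = [10, 20, 30]
it = iter(values)
print(next(it))  # -> 10
print(next(it))  # -> 20
print(next(it))  # -> 30
# calling next(it) again would raise StopIteration
```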
# To build an iterator manually, three methods are needed:
# * `__init__()`
# * `__iter__()`
# * `__next__()`
class even_numbers:
    def __init__(self, max=1000):
        self.max = max
    def __iter__(self):
        self.num = 0
        return self
    def __next__(self):
        # a falsy max (0 or None) makes the sequence unbounded
        if not self.max or self.num <= self.max:
            result = self.num
            self.num += 2
            return result
        else:
            raise StopIteration
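# A quick exercise of the iterator; a compact variant of the class is
# repeated here so the snippet runs on its own:

```python
import itertools

class EvenNumbers:
    # iterator yielding 0, 2, 4, ... up to and including max
    def __init__(self, max=1000):
        self.max = max
    def __iter__(self):
        self.num = 0
        return self
    def __next__(self):
        if self.num <= self.max:
            result = self.num
            self.num += 2
            return result
        raise StopIteration

print(list(EvenNumbers(8)))                      # -> [0, 2, 4, 6, 8]
print(list(itertools.islice(EvenNumbers(), 3)))  # -> [0, 2, 4]
```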
# An iterator for the Fibonacci sequence
# +
import time
class FiboIter():
def __iter__(self):
self.n1 = 0
self.n2 = 1
self.counter = 0
return self
def __next__(self):
if self.counter == 50:
raise StopIteration
else:
if self.counter == 0:
self.counter += 1
return self.n1
elif self.counter == 1:
self.counter += 1
return self.n2
else:
self.aux = self.n1 + self.n2
self.n1, self.n2 = self.n2, self.aux
self.counter += 1
return self.aux
fibonacci = FiboIter()
for element in fibonacci:
print(element)
time.sleep(0.05)
# -
# ### 2. Generators
#
# Generators are a simple way of creating an iterator. The differences from a hand-written iterator class are:
#
# * Instead of creating a class, you write a function
# * The function uses the **yield** statement
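# The even-number iterator from the previous section can be written far more
# compactly as a generator:

```python
# yield suspends the function and resumes it on the next call, so no
# __iter__/__next__ boilerplate is needed.
def evens(max=1000):
    n = 0
    while n <= max:
        yield n
        n += 2

print(list(evens(8)))  # -> [0, 2, 4, 6, 8]
```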
# +
def count():
a = "First"
yield a
a = "Second"
yield a
a = "Third"
yield a
for i in count():
print(i)
# +
def count():
a = print("One")
yield a
a = print("Two")
yield a
a = print("Three")
yield a
l = count()
print(next(l))
print(next(l))
print(next(l))
# +
import time
def fib(max):
    n1 = 0
    n2 = 1
    counter = 0
    while counter <= max:
        counter += 1
        nxt = n1 + n2  # 'nxt' rather than 'next', to avoid shadowing the built-in
        n1, n2 = n2, nxt
        yield nxt
if __name__ == "__main__":
for i in fib(1000):
print(i)
time.sleep(0.5)
# -
# ### 3. Sets
#
# A set is an **unordered** collection with no **duplicate** elements. Set objects support mathematical operations such as union, intersection and difference.
#
# Curly braces (without the colons that would make a dictionary) or the set() function can be used to create sets. Note: to create an empty set you have to use set(), not {}, because {} creates an empty dictionary.
# +
#Defining a set
a = {"World", False, 3.14,}
print(a)
#Converting a list into a set; notice how the repeated values are removed
b = set([1,2,2,3,3,3,4,4,4,4])
print(b)
#Adding data into a set
a.add(4)
print(a)
#adding a set of data into a set
a.update(["Many", 12.2, True])
print(a)
#Removing data with 2 functions
a.discard("Many")
print(a)
a.remove(12.2)
print(a)
#Removing a random element
a.pop()
print(a)
#To remove everything from a set
a.clear()
print(a)
# -
# Operations with sets
# +
#Defining the two sets
set_1 = {1,2,3,4,5}
set_2 = {3,4,5,6,7}
#Union operator | joins both sets
set_3 = set_1 | set_2
print(set_3)
#Intersection operator & keeps only the elements present in both sets
set_3 = set_1 & set_2
print(set_3)
#Difference operator - keeps the elements of one set that are not in the other
set_3 = set_1 - set_2
print(set_3)
set_3 = set_2 - set_1
print(set_3)
#Symmetric difference operator ^ keeps the elements that are in exactly one of the sets
set_3 = set_1 ^ set_2
print(set_3)
#The same operations are also available as named methods
set_3 = set_1.union(set_2)
print(set_3)
set_3 = set_1.intersection(set_2)
print(set_3)
set_3 = set_1.difference(set_2)
print(set_3)
set_3 = set_2.difference(set_1)
print(set_3)
set_3 = set_1.symmetric_difference(set_2)
print(set_3)
# -
#Converting an iterable to a set automatically removes the duplicates
list1 = [1,2,2,3,3,3,4,4,4,4,5,5,5,5,5]
print(set(list1))
# File: python_projects/advanced_python/advaced_data_structures.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Describe Data
# Examine the “gross” or “surface” properties of the acquired data and report on the results.
#
# - Describe the data that has been acquired,
# - including the format of the data,
# - the quantity of data (for example, the number of records and fields in each table),
# - the identities of the fields,
# - and any other surface features which have been discovered.
# - Evaluate whether the data acquired satisfies the relevant requirements.
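# A surface description of this kind can also be gathered programmatically.
# The sketch below profiles a CSV file using only the standard library; the
# sample data and field names are purely illustrative.

```python
# Report record count, field names and per-field fill rate for a CSV file.
import csv
import io

def describe_csv(fobj):
    reader = csv.DictReader(fobj)
    fields = reader.fieldnames
    n_rows = 0
    filled = {f: 0 for f in fields}
    for row in reader:
        n_rows += 1
        for f in fields:
            if row[f] not in (None, ''):
                filled[f] += 1
    return {'records': n_rows, 'fields': fields, 'filled': filled}

sample = io.StringIO("id,name,score\n1,ann,3.5\n2,,4.0\n")
print(describe_csv(sample))
# -> {'records': 2, 'fields': ['id', 'name', 'score'], 'filled': {'id': 2, 'name': 1, 'score': 2}}
```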
# File: {{cookiecutter.proj}}/{{cookiecutter.proj}}/2.Data_Understanding/.ipynb_checkpoints/2) Data description report-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/logodwengo.png" alt="Banner" width="150"/>
# <div>
# <font color=#690027 markdown="1">
# <h1>LINEAR REGRESSION - TREND LINE APPLICATION - THE POST</h1>
# </font>
# </div>
# <div class="alert alert-box alert-success">
# In this notebook you are presented with data on Belgian mail delivery. <br>
# - You determine the degree of association by means of the correlation coefficient.<br>
# - You determine the equation of the regression line. <br>
# - You present the given data and the regression line clearly in a graph.
# </div>
# ### Task
# The figure shows that the number of letters a Belgian sends per year fell sharply over the period 2010-2015. The number of letters sent thus shows a **downward trend**. As a result, the postman no longer delivers your letters every day.
#
# You consider the number of letters sent per Belgian per year as a function of the year. <br>
# Extract the data you need from the figure. Enter the data as follows:
# - the x-values correspond to the year;
# - the y-values correspond to the number of letters per Belgian per year.
#
# Pay attention to the phrase '**as a function of**'! It tells you which quantity goes on the x-axis and which on the y-axis.
#
# Next you determine:
# - the degree of association between the year and the number of letters, using the correlation coefficient;
# - the equation of the **trend line**. <br>
#
# Finally, you present the given data and the trend line clearly in a graph.
#
# Tip: You can improve the presentation of the trend line by using extra x-values. That way you can show how the figures may evolve in the future. With such a trend line you are in fact building a mathematical model that lets you study possible evolutions.
#
# Do not forget to import the necessary modules.
# <img src="images/postbode.jpg" alt="Banner" width="400"/>
# <center>Figure 1: The postman no longer comes to the door every day. [1]</center>
# Work step by step:
# - import the necessary modules;
# - enter the data;
# - visualise the data in a scatter plot;
# - compute the correlation coefficient;
# - enter the user-defined functions you need for the linear regression;
# - have the coefficients of the trend line computed;
# - have the y-values of the points on the trend line computed;
# - draw the trend line on the scatter plot;
# - use extra x-values to visualise the trend into the future.
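# A minimal pure-Python sketch of the computation; the data points below are
# made-up stand-ins for values read off a figure such as Figure 1, so treat
# the numbers (not the method) as illustrative.

```python
# Least-squares trend line and correlation coefficient from the textbook
# formulas; x = year, y = letters per Belgian per year (illustrative values).
import math

x = [2010, 2011, 2012, 2013, 2014, 2015]
y = [120, 110, 102, 95, 88, 80]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)

slope = sxy / sxx               # negative for a downward trend
intercept = my - slope * mx
r = sxy / math.sqrt(sxx * syy)  # close to -1: strong negative association

print(slope, intercept, r)
```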
# <div class="alert alert-box alert-warning">
# In another notebook, 'LineaireRegressieOefeningPostpakken', you are given the following two tasks: <br>
# - Do the same for the number of parcels a Belgian sends per year.<br>
# - Between which quantities is the association strongest: between the number of letters and the year, or between the number of parcels and the year?
# </div>
# # Sample solution
# ### Importing the necessary modules
# <div>
# <font color=#690027 markdown="1">
# <h2>1. Entering and visualising the data</h2>
# </font>
# </div>
# <div>
# <font color=#690027 markdown="1">
# <h2>2. The correlation coefficient</h2>
# </font>
# </div>
# Interpretation:
# <div>
# <font color=#690027 markdown="1">
# <h2>3. Linear regression</h2>
# </font>
# </div>
# <div>
# <font color=#690027 markdown="1">
# <h2>4. Graph</h2>
# </font>
# </div>
# <div>
# <h2>References</h2>
# </div>
# [1] tlb, mtm. (21 February 2017). Postbode komt niet meer elke dag aan huis [Postman no longer comes to the door every day]. *Het Nieuwsblad*. https://www.nieuwsblad.be/cnt/dmf20170220_02742099
# <img src="images/cclic.png" alt="Banner" align="left" width="100"/><br><br>
# Python notebook for mathematics, see Computational thinking - Programming in Python by <a href="http://www.aiopschool.be">AI Op School</a>, by <NAME> & <NAME>, licensed under a <a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence</a>.
| Wiskunde/Spreidingsdiagram/1002_LineaireRegressieOefeningPost.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
from bs4 import BeautifulSoup
url = 'https://dct.ntcu.edu.tw/news.php'
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
a_tags = soup.find_all('a')
for tag in a_tags:
    href = tag.get('href')
    if href:  # skip anchors without an href attribute
        print('Link: ' + str(tag.string) + ' -> ' + href)
# -
# File: BS4_ntcu.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import numpy as np
import cv2
import os
image_folder="ParticipantScans\\GAN\\LeaveOut_002\\Results\\images\\Cropped"
# +
from local_vars import root_folder
image_fullpath = os.path.join(root_folder, image_folder)
target_file_list = [f for f in os.listdir(image_fullpath) if f.endswith('-targets_cropped.png')]
output_file_list = [f for f in os.listdir(image_fullpath) if f.endswith('-outputs_cropped.png')]
num_target_images = len(target_file_list)
num_output_images = len(output_file_list)
print( "Found {} target image files".format(num_target_images))
print( "Found {} output image files".format(num_output_images))
# +
from sklearn import metrics
def mutualinfo(a,b):
#compute the normalized mutual information score for the two images
#"amount of information" you can tell about one image from the other
a = [item for sublist in a.tolist() for item in sublist]
b = [item for sublist in b.tolist() for item in sublist]
mi = metrics.normalized_mutual_info_score(a, b, average_method='arithmetic')
return mi
# -
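# A standalone sanity check of `normalized_mutual_info_score` on flattened toy "images" (pixel values chosen arbitrarily): identical inputs score 1, and since NMI is invariant to relabelling, an inverted copy scores 1 as well.

```python
import numpy as np
from sklearn import metrics

a = np.array([[0, 0], [255, 255]]).flatten().tolist()
b = np.array([[255, 255], [0, 0]]).flatten().tolist()
# identical labelings, and label-swapped labelings, both carry full information
same = metrics.normalized_mutual_info_score(a, a, average_method='arithmetic')
inverted = metrics.normalized_mutual_info_score(a, b, average_method='arithmetic')
print(same, inverted)
```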
def mse(imageA, imageB):
# the 'Mean Squared Error' between the two images is the
# sum of the squared difference between the two images;
# NOTE: the two images must have the same dimension
err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
err /= float(imageA.shape[0] * imageA.shape[1])
# return the MSE, the lower the error, the more "similar"
# the two images are
return err
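# A quick standalone check of the MSE formula above, re-derived with NumPy on tiny made-up arrays:

```python
import numpy as np

imgA = np.zeros((2, 2), dtype=np.uint8)
imgB = np.full((2, 2), 2, dtype=np.uint8)
# sum of squared differences (4 pixels, each differing by 2), over the pixel count
err = np.sum((imgA.astype("float") - imgB.astype("float")) ** 2)
err /= float(imgA.shape[0] * imgA.shape[1])
print(err)  # 16 / 4 = 4.0
```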
# +
from skimage.metrics import structural_similarity  # replaces the deprecated measure.compare_ssim
def compare_images(imageA, imageB, display):
    # compute the mean squared error, structural similarity, and mutual info
    # for the images
    m = mse(imageA, imageB)
    s = structural_similarity(imageA, imageB)
mi = mutualinfo(imageA, imageB)
if display:
# setup the figure
fig = plt.figure()
plt.suptitle("MSE: %.2f, SSIM: %.2f" % (m, s))
# show first image
ax = fig.add_subplot(1, 2, 1)
plt.imshow(imageA, cmap = plt.cm.gray)
plt.axis("off")
# show the second image
ax = fig.add_subplot(1, 2, 2)
plt.imshow(imageB, cmap = plt.cm.gray)
plt.axis("off")
# show the images
plt.show()
return m, s, mi
# +
print("Reading and comparing files...")
sum_mse = 0
sum_ssim = 0
sum_mi = 0
top5 = [(0, 0, ""), (0, 0, ""), (0, 0, ""), (0, 0, ""), (0, 0, "")]
for i in range(num_output_images):
target_file_name = target_file_list[i]
target_file_fullname = os.path.join(image_fullpath, target_file_name)
output_file_name = output_file_list[i]
output_file_fullname = os.path.join(image_fullpath, output_file_name)
target_image = cv2.imread(target_file_fullname, 0)
output_image = cv2.imread(output_file_fullname, 0)
#target_image = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)
#output_image = cv2.cvtColor(output_image, cv2.COLOR_BGR2GRAY)
m, s, mi = compare_images(target_image, output_image, False)
if i < 5:
top5[i] = (s, i, target_file_name)
    else:
        top5.sort(key=lambda tup: tup[0])
        # top5 is sorted ascending, so the current minimum sits at index 0;
        # replace it whenever the new score beats it
        if s > top5[0][0]:
            top5[0] = (s, i, target_file_name)
#print(top5)
sum_mse += m
sum_ssim += s
sum_mi += mi
average_mse = sum_mse / num_output_images
average_ssim = sum_ssim / num_output_images
average_mi = sum_mi / num_output_images
print("Average mse: {} \nAverage ssim: {} \nAverage mi: {}".format(average_mse,average_ssim, average_mi))
print(top5)
# -
fig = plt.figure(figsize=(18, 5*5))
for i in range(5):
index = top5[i][1]
target_file_name = top5[i][2]
target_file_fullname = os.path.join(image_fullpath, target_file_name)
output_file_name = output_file_list[index]
output_file_fullname = os.path.join(image_fullpath, output_file_name)
target_image = cv2.imread(target_file_fullname, 0)
output_image = cv2.imread(output_file_fullname, 0)
#print(target)
a1 = fig.add_subplot(5,3,i*3+2)
img1 = a1.imshow(target_image.astype(np.float32))
a1.set_title("Ultrasound #{}".format(index))
a2 = fig.add_subplot(5,3,i*3+3)
img2 = a2.imshow(output_image.astype(np.float32))
c2 = fig.colorbar(img2, fraction=0.046, pad=0.04)
| Notebooks/Generative/CompareImages.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: jax0227
# language: python
# name: jax0227
# ---
# # Latent ODE
# This example trains a [Latent ODE](https://arxiv.org/abs/1810.01367).
#
# In this case, it's on a simple dataset of decaying oscillators. That is, 2-dimensional time series that look like:
#
# ```
# xx ***
# ** *
# x* **
# *x
# x *
# * * xxxxx
# * x * xx xx *******
# x x **
# x * x * x * xxxxxxxx ******
# x * x * x * xxx *xx *
# x * xx ** x ** xx
# x * x * x * xx ** xx
# * x * x ** x * xxx
# x * * x * xx **
# x * x * xx xx* ***
# x *x * xxx xxx *****
# x x* * xx
# x xx ******
# xxxxx
# ```
#
# The model is trained to generate samples that look like this.
#
# What's really nice about this example is that we will take the underlying data to be irregularly sampled. We will have different observation times for different batch elements.
#
# Most differential equation libraries will struggle with this, as they usually mandate that the differential equation be solved over the same timespan for all batch elements. Working around this can involve programming complexity like outputting at lots and lots of times (the union of all the observation times in the batch), or mathematical complexities like reparameterising the differential equation.
#
# However, Diffrax is capable of handling this without such issues! You can `vmap` over
# different integration times for different batch elements.
#
# **Reference:**
#
# ```bibtex
# @incollection{rubanova2019latent,
# title={{L}atent {O}rdinary {D}ifferential {E}quations for {I}rregularly-{S}ampled
# {T}ime {S}eries},
# author={<NAME> Chen, <NAME>. and <NAME>.},
# booktitle={Advances in Neural Information Processing Systems},
# publisher={Curran Associates, Inc.},
# year={2019},
# }
# ```
#
# This example is available as a Jupyter notebook [here](https://github.com/patrick-kidger/diffrax/blob/main/examples/latent_ode.ipynb).
# +
import time
import diffrax
import equinox as eqx
import jax
import jax.nn as jnn
import jax.numpy as jnp
import jax.random as jrandom
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import optax
matplotlib.rcParams.update({"font.size": 30})
# -
# The vector field. Note its overall structure of `scalar * tanh(mlp(y))` which is a good structure for Latent ODEs. (Here the tanh is part of `self.mlp`.)
class Func(eqx.Module):
scale: jnp.ndarray
mlp: eqx.nn.MLP
def __call__(self, t, y, args):
return self.scale * self.mlp(y)
# Wrap up the differential equation solve into a model.
class LatentODE(eqx.Module):
func: Func
rnn_cell: eqx.nn.GRUCell
hidden_to_latent: eqx.nn.Linear
latent_to_hidden: eqx.nn.MLP
hidden_to_data: eqx.nn.Linear
hidden_size: int
latent_size: int
def __init__(
self, *, data_size, hidden_size, latent_size, width_size, depth, key, **kwargs
):
super().__init__(**kwargs)
mkey, gkey, hlkey, lhkey, hdkey = jrandom.split(key, 5)
scale = jnp.ones(())
mlp = eqx.nn.MLP(
in_size=hidden_size,
out_size=hidden_size,
width_size=width_size,
depth=depth,
activation=jnn.softplus,
final_activation=jnn.tanh,
key=mkey,
)
self.func = Func(scale, mlp)
self.rnn_cell = eqx.nn.GRUCell(data_size + 1, hidden_size, key=gkey)
self.hidden_to_latent = eqx.nn.Linear(hidden_size, 2 * latent_size, key=hlkey)
self.latent_to_hidden = eqx.nn.MLP(
latent_size, hidden_size, width_size=width_size, depth=depth, key=lhkey
)
self.hidden_to_data = eqx.nn.Linear(hidden_size, data_size, key=hdkey)
self.hidden_size = hidden_size
self.latent_size = latent_size
# Encoder of the VAE
def _latent(self, ts, ys, key):
data = jnp.concatenate([ts[:, None], ys], axis=1)
hidden = jnp.zeros((self.hidden_size,))
for data_i in reversed(data):
hidden = self.rnn_cell(data_i, hidden)
context = self.hidden_to_latent(hidden)
mean, logstd = context[: self.latent_size], context[self.latent_size :]
std = jnp.exp(logstd)
latent = mean + jrandom.normal(key, (self.latent_size,)) * std
return latent, mean, std
# Decoder of the VAE
def _sample(self, ts, latent):
dt0 = 0.4 # selected as a reasonable choice for this problem
y0 = self.latent_to_hidden(latent)
sol = diffrax.diffeqsolve(
diffrax.ODETerm(self.func),
diffrax.Tsit5(),
ts[0],
ts[-1],
dt0,
y0,
saveat=diffrax.SaveAt(ts=ts),
)
return jax.vmap(self.hidden_to_data)(sol.ys)
@staticmethod
def _loss(ys, pred_ys, mean, std):
# -log p_θ with Gaussian p_θ
reconstruction_loss = 0.5 * jnp.sum((ys - pred_ys) ** 2)
# KL(N(mean, std^2) || N(0, 1))
variational_loss = 0.5 * jnp.sum(mean**2 + std**2 - 2 * jnp.log(std) - 1)
return reconstruction_loss + variational_loss
# Run both encoder and decoder during training.
def train(self, ts, ys, *, key):
latent, mean, std = self._latent(ts, ys, key)
pred_ys = self._sample(ts, latent)
return self._loss(ys, pred_ys, mean, std)
# Run just the decoder during inference.
def sample(self, ts, *, key):
latent = jrandom.normal(key, (self.latent_size,))
return self._sample(ts, latent)
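# As a standalone sanity check of the closed-form KL term used in `_loss` above (re-derived in NumPy; not part of the model), the divergence vanishes exactly when the posterior matches the standard-normal prior:

```python
import numpy as np

def kl_std_normal(mean, std):
    # KL(N(mean, std^2) || N(0, 1)), summed over dimensions
    return 0.5 * np.sum(mean**2 + std**2 - 2 * np.log(std) - 1)

print(kl_std_normal(np.zeros(3), np.ones(3)))  # 0.0
```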
# Toy dataset of decaying oscillators.
#
# By way of illustration we set this up as a differential equation and solve this using Diffrax as well. (Despite this being an autonomous linear ODE, for which a closed-form solution is actually available.)
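# For reference, the closed-form solution of this autonomous linear ODE dy/dt = A y is y(t) = expm(t A) y0; a small SciPy check (the initial condition y0 below is chosen arbitrarily):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-0.1, 1.3], [-1.0, -0.1]])
y0 = np.array([1.0, 0.0])

def closed_form(t):
    # matrix exponential applied to the initial condition
    return expm(t * A) @ y0

# at t = 0 the matrix exponential is the identity, so we recover y0
print(closed_form(0.0))
```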
def get_data(dataset_size, *, key):
ykey, tkey1, tkey2 = jrandom.split(key, 3)
y0 = jrandom.normal(ykey, (dataset_size, 2))
t0 = 0
t1 = 2 + jrandom.uniform(tkey1, (dataset_size,))
ts = jrandom.uniform(tkey2, (dataset_size, 20)) * (t1[:, None] - t0) + t0
ts = jnp.sort(ts)
dt0 = 0.1
def func(t, y, args):
return jnp.array([[-0.1, 1.3], [-1, -0.1]]) @ y
def solve(ts, y0):
sol = diffrax.diffeqsolve(
diffrax.ODETerm(func),
diffrax.Tsit5(),
ts[0],
ts[-1],
dt0,
y0,
saveat=diffrax.SaveAt(ts=ts),
)
return sol.ys
ys = jax.vmap(solve)(ts, y0)
return ts, ys
def dataloader(arrays, batch_size, *, key):
dataset_size = arrays[0].shape[0]
assert all(array.shape[0] == dataset_size for array in arrays)
indices = jnp.arange(dataset_size)
while True:
perm = jrandom.permutation(key, indices)
(key,) = jrandom.split(key, 1)
start = 0
end = batch_size
while start < dataset_size:
batch_perm = perm[start:end]
yield tuple(array[batch_perm] for array in arrays)
start = end
end = start + batch_size
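# The permutation-based batching above can be checked with a plain-NumPy analogue: each epoch visits every index exactly once, with a possibly smaller final batch (toy sizes chosen arbitrarily):

```python
import numpy as np

def epoch_batches(dataset_size, batch_size, rng):
    # one epoch: shuffle the indices, then slice them into consecutive batches
    perm = rng.permutation(dataset_size)
    return [perm[start:start + batch_size]
            for start in range(0, dataset_size, batch_size)]

batches = epoch_batches(dataset_size=10, batch_size=4, rng=np.random.default_rng(0))
# the batches partition the index set 0..9
print(sorted(np.concatenate(batches).tolist()))
```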
# The main entry point. Try running `main()` to train a model.
def main(
dataset_size=10000,
batch_size=256,
lr=1e-2,
steps=250,
save_every=50,
hidden_size=16,
latent_size=16,
width_size=16,
depth=2,
seed=5678,
):
key = jrandom.PRNGKey(seed)
data_key, model_key, loader_key, train_key, sample_key = jrandom.split(key, 5)
ts, ys = get_data(dataset_size, key=data_key)
model = LatentODE(
data_size=ys.shape[-1],
hidden_size=hidden_size,
latent_size=latent_size,
width_size=width_size,
depth=depth,
key=model_key,
)
@eqx.filter_value_and_grad
def loss(model, ts_i, ys_i, key_i):
batch_size, _ = ts_i.shape
key_i = jrandom.split(key_i, batch_size)
loss = jax.vmap(model.train)(ts_i, ys_i, key=key_i)
return jnp.mean(loss)
@eqx.filter_jit
def make_step(model, opt_state, ts_i, ys_i, key_i):
value, grads = loss(model, ts_i, ys_i, key_i)
key_i = jrandom.split(key_i, 1)[0]
updates, opt_state = optim.update(grads, opt_state)
model = eqx.apply_updates(model, updates)
return value, model, opt_state, key_i
optim = optax.adam(lr)
opt_state = optim.init(eqx.filter(model, eqx.is_inexact_array))
# Plot results
num_plots = 1 + (steps - 1) // save_every
if ((steps - 1) % save_every) != 0:
num_plots += 1
fig, axs = plt.subplots(1, num_plots, figsize=(num_plots * 8, 8))
axs[0].set_ylabel("x")
axs = iter(axs)
for step, (ts_i, ys_i) in zip(
range(steps), dataloader((ts, ys), batch_size, key=loader_key)
):
start = time.time()
value, model, opt_state, train_key = make_step(
model, opt_state, ts_i, ys_i, train_key
)
end = time.time()
print(f"Step: {step}, Loss: {value}, Computation time: {end - start}")
if (step % save_every) == 0 or step == steps - 1:
ax = next(axs)
# Sample over a longer time interval than we trained on. The model will be
# sufficiently good that it will correctly extrapolate!
sample_t = jnp.linspace(0, 12, 300)
sample_y = model.sample(sample_t, key=sample_key)
sample_t = np.asarray(sample_t)
sample_y = np.asarray(sample_y)
ax.plot(sample_t, sample_y[:, 0])
ax.plot(sample_t, sample_y[:, 1])
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel("t")
plt.savefig("latent_ode.png")
plt.show()
main()
| examples/latent_ode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 4.5 Visualizing the network model with TensorBoardX
#
# - In this file, we visualize the OpenPose network model with TensorBoard
#
# # 4.5 Learning goals
#
# 1. Be able to set up an environment in which tensorboardX runs
# 2. Be able to output a file that visualizes the network (graph) of the OpenPoseNet class with tensorboardX
# 3. Be able to render the tensorboardX graph file in a browser and check things such as tensor sizes
#
#
# # Preparation
#
# To use tensorboardX, you need to install tensorboardX and TensorFlow. Install them as follows:
#
# pip install tensorflow
#
# pip install tensorboardx
#
# Note: installing both at once with "pip install tensorflow tensorboardx" reportedly does not work correctly.
#
#
# Import the required packages
import torch
from utils.openpose_net import OpenPoseNet
# Prepare the model
net = OpenPoseNet()
net.train()
# +
# 1. Import tensorboardX's writer class
from tensorboardX import SummaryWriter
# 2. Prepare a writer that saves into the folder "tbX"
# (the folder "tbX" is created automatically if it does not exist)
writer = SummaryWriter("./tbX/")
# 3. Create dummy data to feed through the network
batch_size = 2
dummy_img = torch.rand(batch_size, 3, 368, 368)
# 4. Save to the writer the graph produced when the dummy data
# dummy_img is passed through the OpenPose instance net
writer.add_graph(net, (dummy_img, ))
writer.close()
# 5. Open a command prompt, move to the folder "4_pose_estimation"
# that contains the folder "tbX", and run the following command:
# tensorboard --logdir="./tbX/"
# Then open http://localhost:6006
# in your browser
# -
# %load_ext tensorboard
# %tensorboard --logdir runs
| 40_pose_estimation/4-5_TensorBoardX.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
file_path="/home/arun/workspace/projects/janaganana-data/data/education_level/C-08/csvs/education_all_india.csv"
# Read the CSV file of education data
import pandas as pd
df = pd.read_csv(file_path)
# -
df.head(4)
# only interested in literacy of all ages
# if age specific is required then additional conditions need to be added here
df = df[((df.total == "Rural") | (df.total == "Urban")) & (df.age_group == "All ages")]
# pick required columns
df=df[['state_code', 'dist_code', 'total', 'area_name', 'lit_wo_level_m', 'lit_wo_level_f', 'below_prim_m', 'below_prim_f', 'prim_m', 'prim_f', 'middle_m', 'middle_f', 'secondary_m', 'secondary_f', 'inter_puc_m', 'inter_puc_f', 'nontech_dipl_m','nontech_dipl_f', 'tech_dipl_m', 'tech_dipl_f', 'grad_above_m', 'grad_above_f', 'unclassfd_m', 'unclassfd_f']]
# +
def func(ds):
    ds.area_name = ds.area_name.lower()
    if ds.area_name.startswith('district'):
        ds['geo_code'] = ds.dist_code
    else:
        ds['geo_code'] = ds.state_code
    if ds.area_name == 'india':
        ds['geo_level'] = 'country'
        ds['geo_code'] = 'IN'
    elif ds.area_name.startswith('district'):
        ds['geo_level'] = 'district'
    elif ds.area_name.startswith('state'):
        ds['geo_level'] = 'state'
    return ds
df=df.apply(func, axis=1)
# -
df.head(4)
df.columns
df = df[['geo_code', 'geo_level', 'total', 'lit_wo_level_m', 'lit_wo_level_f', 'below_prim_m', 'below_prim_f', 'prim_m', 'prim_f', 'middle_m', 'middle_f', 'secondary_m', 'secondary_f', 'inter_puc_m', 'inter_puc_f', 'nontech_dipl_m','nontech_dipl_f', 'tech_dipl_m', 'tech_dipl_f', 'grad_above_m', 'grad_above_f', 'unclassfd_m', 'unclassfd_f']]
df.head(4)
# +
# isolate 'Male' data
df1 = df[['geo_code', 'geo_level', 'total', 'lit_wo_level_m', 'below_prim_m', 'prim_m', 'middle_m', 'secondary_m', 'inter_puc_m', 'nontech_dipl_m', 'tech_dipl_m', 'grad_above_m', 'unclassfd_m']]
df1['Sex'] = 'Male'
df1.rename(columns={
    'total': 'area_type',
    'lit_wo_level_m': 'literate_without_level',
    'below_prim_m': 'below_primary',
    'prim_m': 'primary',
    'middle_m': 'middle',
    'secondary_m': 'secondary_matric',
    'inter_puc_m': 'intermediate_puc',
    'nontech_dipl_m': 'nontech_diploma_or_degree',
    'tech_dipl_m': 'tech_diploma_or_degree',
    'grad_above_m': 'graduate_above',
    'unclassfd_m': 'unclassified',
}, inplace=True)
# Extract each education level and add rows corresponding to each value
dmale_level = pd.melt(df1, id_vars=['geo_code', 'geo_level', 'area_type', 'Sex'], var_name="edu_level", value_name="level_total")
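# The `pd.melt` call above converts the wide per-level columns into long rows; a toy illustration (column names and values invented for the example):

```python
import pandas as pd

wide = pd.DataFrame({"geo_code": ["IN"], "primary": [10], "middle": [7]})
long_df = pd.melt(wide, id_vars=["geo_code"],
                  var_name="edu_level", value_name="level_total")
# one row per (geo_code, education level) pair
print(long_df)
```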
# +
df2 = df[['geo_code', 'geo_level', 'total', 'lit_wo_level_f', 'below_prim_f', 'prim_f', 'middle_f', 'secondary_f', 'inter_puc_f', 'nontech_dipl_f', 'tech_dipl_f', 'grad_above_f', 'unclassfd_f']]
df2['Sex'] = 'Female'
df2.rename(columns={
    'total': 'area_type',
    'lit_wo_level_f': 'literate_without_level',
    'below_prim_f': 'below_primary',
    'prim_f': 'primary',
    'middle_f': 'middle',
    'secondary_f': 'secondary_matric',
    'inter_puc_f': 'intermediate_puc',
    'nontech_dipl_f': 'nontech_diploma_or_degree',
    'tech_dipl_f': 'tech_diploma_or_degree',
    'grad_above_f': 'graduate_above',
    'unclassfd_f': 'unclassified',
}, inplace=True)
dfemale_level = pd.melt(df2, id_vars=['geo_code', 'geo_level', 'area_type', 'Sex'], var_name="edu_level", value_name="level_total")
# -
# combine both sets
result = pd.concat([dmale_level, dfemale_level])
result.head(5)
output_path="/home/arun/workspace/projects/janaganana-data/data/education_level/C-08/csvs/education_all_india_output.csv"
result.to_csv(output_path, index=False)
| data/Jupyter-Note Books/education_level_all_india.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dpm-coursework
# language: python
# name: dpm-coursework
# ---
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from hottbox.core import Tensor, TensorTKD, residual_tensor
from hottbox.algorithms.decomposition import HOSVD, HOOI
from coursework.data import get_image, plot_tensors
np.random.seed(0)
# + [markdown] toc-hr-collapsed=false
# [Return to Table of Contents](./0_Table_of_contents.ipynb)
# -
# # Tucker Decomposition
#
# <img src="./imgs/TensorTKD.png" alt="Drawing" style="width: 500px;"/>
#
# In the previous [assignment](./2_Efficient_representation_of_multidimensional_arrays.ipynb), you were provided with materials covering efficient representations of multidimensional arrays of data, such as the Tucker form. In this module, you will take a closer look at it and the associated computational methods.
#
#
# Any tensor of arbitrarily large order can be decomposed in the Tucker form. As illustrated above, a tensor $\mathbf{\underline{X}} \in \mathbb{R}^{I \times J \times K}$ can be represented as a dense core tensor $\mathbf{\underline{G}}$ and a set of factor matrices $\mathbf{A} \in \mathbb{R}^{I \times Q}, \mathbf{B} \in \mathbb{R}^{J \times R}$ and $\mathbf{C} \in
# \mathbb{R}^{K \times P}$
#
# $$
# \mathbf{\underline{X}} = \mathbf{\underline{G}} \times_1 \mathbf{A} \times_2 \mathbf{B} \times_3 \mathbf{C} = \Big[ \mathbf{\underline{G}} ; \mathbf{A}, \mathbf{B}, \mathbf{C} \Big]
# $$
#
#
# In practice, there exist several computational methods to accomplish this, all of which are combined into a Tucker decomposition framework. The two most commonly used algorithms are:
# 1. Higher Order Singular Value Decomposition ([HOSVD](#Higher-Order-Singular-Value-Decomposition-(HOSVD)))
# 1. Higher Order Orthogonal Iteration ([HOOI](#Higher-Order-Orthogonal-Iteration-(HOOI)))
#
# + [markdown] toc-hr-collapsed=false
# # Higher Order Singular Value Decomposition (HOSVD)
#
# The HOSVD is a special case of the Tucker decomposition, in which all the factor matrices are constrained to be orthogonal. They are computed as truncated version of the left singular matrices of all possible mode-$n$ unfoldings of tensor $\mathbf{\underline{X}}$:
#
# $$
# \begin{aligned}
# \mathbf{X}_{(1)} &= \mathbf{U}_1 \mathbf{\Sigma}_1 \mathbf{V}_1^T \quad \rightarrow \quad \mathbf{A} = \mathbf{U}_1[1:R_1]\\
# \mathbf{X}_{(2)} &= \mathbf{U}_2 \mathbf{\Sigma}_2 \mathbf{V}_2^T \quad \rightarrow \quad \mathbf{B} = \mathbf{U}_2[1:R_2] \\
# \mathbf{X}_{(3)} &= \mathbf{U}_3 \mathbf{\Sigma}_3 \mathbf{V}_3^T \quad \rightarrow \quad \mathbf{C} = \mathbf{U}_3[1:R_3] \\
# \end{aligned}
# $$
#
# After factor matrices are obtained, the core tensor $\mathbf{\underline{G}}$ is computed as
#
# $$
# \mathbf{\underline{G}} = \mathbf{\underline{X}} \times_1 \mathbf{A}^T \times_2 \mathbf{B}^T \times_3 \mathbf{C}^T
# $$
#
# -
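# The HOSVD recipe above can be sketched directly in NumPy via SVDs of the mode-$n$ unfoldings (a minimal illustration independent of `hottbox`; with full multi-linear rank the reconstruction is exact):

```python
import numpy as np

def unfold(X, mode):
    # mode-n unfolding: bring axis `mode` to the front, flatten the rest
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def fold(M, mode, shape):
    # inverse of unfold for a target tensor shape
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def hosvd(X, ranks):
    # factor matrices: truncated left singular vectors of each unfolding
    fmats = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
             for n, r in enumerate(ranks)]
    # core: project X onto the factor subspaces, one mode at a time
    G = X
    for n, U in enumerate(fmats):
        shape = G.shape[:n] + (U.shape[1],) + G.shape[n + 1:]
        G = fold(U.T @ unfold(G, n), n, shape)
    return G, fmats

def reconstruct(G, fmats):
    # multiply the core by each factor matrix along its mode
    Y = G
    for n, U in enumerate(fmats):
        shape = Y.shape[:n] + (U.shape[0],) + Y.shape[n + 1:]
        Y = fold(U @ unfold(Y, n), n, shape)
    return Y

X = np.random.rand(4, 5, 6)
G, fmats = hosvd(X, ranks=(4, 5, 6))  # full multi-linear rank: lossless
print(np.allclose(reconstruct(G, fmats), X))  # True
```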
# # Higher Order Orthogonal Iteration (HOOI)
#
# The HOOI algorithm is another special case of the Tucker decomposition. Like HOSVD, it decomposes a tensor into a dense core tensor and orthogonal factor matrices. The difference between the two lies in the fact that in HOOI the factor matrices are optimized iteratively using an Alternating Least Squares (ALS) approach. In other words, the Tucker representation $[ \mathbf{\underline{G}};\mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \cdots,\mathbf{A}^{(N)} ]$ of the given tensor $\mathbf{\underline{X}}$ is obtained through the HOOI as follows
#
# $$
# \begin{aligned}
# &\mathbf{\underline{Y}} = \mathbf{\underline{X}} \times_1 \mathbf{A}^{(1)T} \times_2 \cdots \times_{n-1} \mathbf{A}^{(n-1)T} \times_{n+1} \mathbf{A}^{(n+1)} \times \cdots \times_N \mathbf{A}^{(N)} \\
# &\mathbf{A}^{(n)} \leftarrow R_n \text{ leftmost singular vectors of } \mathbf{Y}_{(n)}
# \end{aligned}
# $$
#
# The above is repeated until convergence, then the core tensor $\mathbf{\underline{G}} \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ is computed as
#
# $$
# \mathbf{\underline{G}} = \mathbf{\underline{X}} \times_1 \mathbf{A}^{(1)T} \times_2 \mathbf{A}^{(2)T} \times_3 \cdots \times_N \mathbf{A}^{(N)T}
# $$
# + [markdown] toc-hr-collapsed=false
# # Multi-linear rank
#
# The **multi-linear rank** of a tensor $\mathbf{\underline{X}} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is the $N$-tuple $(R_1, \dots, R_N)$ where each $R_n$ is the rank of the subspace spanned by mode-$n$ fibers, i.e. $R_n = \text{rank} \big( \mathbf{X}_{(n)} \big)$. Thus, for our order-$3$ tensor the multi-linear rank is $(R_1, R_2, R_3)$. Multi-linear rank provides flexibility in compression and approximation of the original tensor.
#
# > **NOTE:** For a tensor of order $N$ the values $R_1, R_2, \dots , R_N$ are not necessarily the same, whereas, for matrices (tensors of order 2) the equality $R_1 = R_2$ always holds, where $R_1$ and $R_2$ are the matrix column rank and row rank respectively.
#
#
# -
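# The multi-linear rank can be computed directly as the matrix rank of each mode-$n$ unfolding (a NumPy sketch; a generic random tensor is full rank in every mode):

```python
import numpy as np

def unfold(X, mode):
    # mode-n unfolding: bring axis `mode` to the front, flatten the rest
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

X = np.random.rand(5, 6, 7)
ml_rank = tuple(int(np.linalg.matrix_rank(unfold(X, n))) for n in range(X.ndim))
print(ml_rank)  # (5, 6, 7) for a generic random tensor
```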
# # Performing tensor decomposition
# +
# Create tensor
I, J, K = 5, 6, 7
array_3d = np.random.rand(I * J * K).reshape((I, J, K)).astype(float)  # np.float is removed in modern NumPy
tensor = Tensor(array_3d)
# Initialise algorithm
algorithm = HOSVD()
# Perform decomposing for selected multi-linear rank
ml_rank = (4, 5, 6)
tensor_tkd = algorithm.decompose(tensor, ml_rank)
# Result preview
print(tensor_tkd)
print('\n\tFactor matrices')
for mode, fmat in enumerate(tensor_tkd.fmat):
print('Mode-{} factor matrix is of shape {}'.format(mode, fmat.shape))
print('\n\tCore tensor')
print(tensor_tkd.core)
# -
# # Evaluation and reconstruction
#
# A Tucker representation of an original tensor is almost always an approximation, regardless of which algorithm has been employed to perform the decomposition. Thus, the relative error of approximation, i.e. the ratio between the Frobenius norms of the residual and original tensors, is commonly used to evaluate the performance of computational methods.
# +
# Compute residual tensor
tensor_res = residual_tensor(tensor, tensor_tkd)
# Compute error of approximation
rel_error = tensor_res.frob_norm / tensor.frob_norm
print("Relative error of approximation = {}".format(rel_error))
# -
# ## **Assignment 1**
#
# 1. Create a tensor of order 4 with the size of each mode given by a prime number, and obtain a Tucker representation using the HOOI algorithm with multi-linear rank (4, 10, 6, 2). Then calculate the ratio between the number of elements in the original tensor and in its Tucker form.
#
# 1. For a tensor that consists of 1331 elements, which multi-linear rank guarantees a perfect reconstruction from its Tucker form, and why? Is such a choice reasonable for practical applications?
#
# ### Solution: Part 1
# Create a tensor
I, J, K, L = 5, 11, 7, 3
array_4d = np.random.rand(I * J * K * L).reshape((I, J, K, L)).astype(float)  # np.float is removed in modern NumPy
tensor = Tensor(array_4d)
# +
# Perform decomposition
algorithm = HOOI()
ml_rank = (4, 10, 6, 2)
tensor_tkd = algorithm.decompose(tensor, ml_rank)
print(tensor_tkd)
print('\n\tFactor matrices')
for mode, fmat in enumerate(tensor_tkd.fmat):
print('Mode-{} factor matrix is of shape {}'.format(mode, fmat.shape))
print('\n\tCore tensor')
print(tensor_tkd.core)
# +
# Compute residual tensor
tensor_res = residual_tensor(tensor, tensor_tkd)
# Compute error of approximation
rel_error = tensor_res.frob_norm / tensor.frob_norm
print("Relative error of approximation = {}".format(rel_error))
# -
# ### Solution: Part 2
# **Conclusion**:
# For a tensor of 1331 elements, which can at most be arranged as an order-3 tensor of shape (11, 11, 11), perfect reconstruction is guaranteed when the multi-linear rank is (11, 11, 11), i.e. the same as the original size.
#
# **Reason**:
# The rank captures the number of basis features of the tensor in each mode, and equals the size of that mode when the tensor is full rank in every mode. When the multi-linear rank equals the original mode sizes, all features are taken into account in the approximation, resulting in a negligible error between the approximation and the original tensor.
#
# **Why this is unreasonable**:
# Although the reconstruction error is small, it is impractical to set the multi-linear rank equal to the original tensor size in real applications. The purpose of choosing a smaller multi-linear rank is to compress the original tensor and thereby reduce computational complexity. If the rank matches the original mode sizes, there is no reduction in storage or computational cost, however large the order and dimensions of the original tensor.
# # Application: Image compression
#
# Color images can be naturally represented as a tensor of order three with the shape `(height x width x channels)` where channels are, for example, Red, Blue and Green (RGB)
#
# <img src="./imgs/image_to_base_colors.png" alt="Drawing" style="width: 500px;"/>
#
# Keeping the original structure allows us to apply methods from multi-linear analysis. For instance, we can employ algorithms for Tucker decomposition in order to compress the original information by varying the values of the desired multi-linear rank.
#
# ```python
# # Get data in form of a Tensor
# car = get_image(item="car", view="top")
# tensor = Tensor(car)
#
# # Initialise algorithm and preform decomposition
# algorithm = HOSVD()
# tensor_tkd = algorithm.decompose(tensor, rank=(25, 25, 3))
#
# # Evaluate result
# tensor_res = residual_tensor(tensor, tensor_tkd)
# rel_error = tensor_res.frob_norm / tensor.frob_norm
#
# print("Relative error of approximation = {}".format(rel_error))
# ```
#
# We can also visually inspect the image obtained by reconstructing the Tucker representation
# ```python
# # Reconstruction
# tensor_rec = tensor_tkd.reconstruct()
#
# # Plot original and reconstructed images side by side
# plot_tensors(tensor, tensor_rec)
# ```
#
# <img src="./imgs/car_orig_vs_reconstructed_25_25_3.png" alt="Drawing" style="width: 500px;"/>
# ## **Assignment 2**
# For this assignment you are provided with the function `get_image()`, which requires two parameters: `item` and `view`. The valid values for the former are **car** and **apple**, while the latter takes only **side** and **top**.
#
# 1. Use a multi-linear rank equal to `(50, 50, 2)` in order to obtain Tucker representations of the images of the car and the apple. Analyse the results by visually inspecting their reconstructions.
#
# 1. Use a multi-linear rank equal to `(50, 50, 2)` in order to obtain Tucker representations of the images of the apple taken from the top and from the side. Analyse the results by visually inspecting their reconstructions.
#
# 1. What would happen to the reconstruction if the value of the multi-linear rank corresponding to the channel mode were decreased to 1?
#
# ### Solution: Part 1
# Create tensors from images
car = get_image(item="car", view="top")
tensor_car = Tensor(car)
apple = get_image(item="apple", view="top")
tensor_apple = Tensor(apple)
# Perform decomposition
algorithm = HOSVD()
tensor_tkd_car = algorithm.decompose(tensor_car, rank=(50, 50, 2))
tensor_tkd_apple = algorithm.decompose(tensor_apple, rank=(50, 50, 2))
# +
# Evaluate results
tensor_res_car = residual_tensor(tensor_car, tensor_tkd_car)
tensor_res_apple = residual_tensor(tensor_apple, tensor_tkd_apple)
rel_error_car = tensor_res_car.frob_norm / tensor_car.frob_norm
rel_error_apple = tensor_res_apple.frob_norm / tensor_apple.frob_norm
print("Relative error of approximation of car = {}".format(rel_error_car))
tensor_rec_car = tensor_tkd_car.reconstruct()
plot_tensors(tensor_car, tensor_rec_car)
print("Relative error of approximation of apple = {}".format(rel_error_apple))
tensor_rec_apple = tensor_tkd_apple.reconstruct()
plot_tensors(tensor_apple, tensor_rec_apple)
# -
# **Include your explanations here**
#
# The original size of each image is (128, 128, 3). With multi-linear rank (50, 50, 2), both the car and apple reconstructions perform impressively, with small relative error. However, the difference between the original and reconstructed images is still distinguishable by eye in the plots. The reconstructed car image has a pink cast, since its colour content is mainly red and blue; in the apple image, the red parts are covered by the dominant yellow.
# ### Solution: Part 2
# Create tensors from images
top = get_image(item="apple", view="top")
tensor_top = Tensor(top)
side = get_image(item="apple", view="side")
tensor_side = Tensor(side)
# Perform decomposition
algorithm = HOSVD()
tensor_tkd_top = algorithm.decompose(tensor_top, rank=(50, 50, 2))
tensor_tkd_side = algorithm.decompose(tensor_side, rank=(50, 50, 2))
# +
# Evaluate results
tensor_res_top = residual_tensor(tensor_top, tensor_tkd_top)
tensor_res_side = residual_tensor(tensor_side, tensor_tkd_side)
rel_error_top = tensor_res_top.frob_norm / tensor_top.frob_norm
rel_error_side = tensor_res_side.frob_norm / tensor_side.frob_norm
print("Relative error of approximation of top = {}".format(rel_error_top))
tensor_rec_top = tensor_tkd_top.reconstruct()
plot_tensors(tensor_top, tensor_rec_top)
print("Relative error of approximation of side = {}".format(rel_error_side))
tensor_rec_side = tensor_tkd_side.reconstruct()
plot_tensors(tensor_side, tensor_rec_side)
# -
# **Include your explanations here**
#
# Comparing the relative errors of the top and side views, the side reconstruction performs better, with a lower error. Inspecting the images, the top view consists largely of yellow, with blue corners and only a small red region, so a poor approximation of the red part matters less. The side view, by contrast, shows a colour gradient from red to yellow, which is harder to reconstruct.
# ### Solution: Part 3
# **Include your explanations here**
#
# The original image has three channels, corresponding to R, G and B. If the decomposition uses a channel-mode rank of one, the reconstructed image is a grey-level version of the original, with values ranging from 0 to 255.
| matlab_code/Part5/zh3718_File_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/yoonputer/Team_Project2/blob/master/Deeplearning/stopwords.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="E308-cxIoHC9" outputId="477fea9f-9e6d-4c8a-b6ab-61faf20d08f1"
# ! ls ./drive/MyDrive/Forkspoon/texts_over.txt
# + colab={"base_uri": "https://localhost:8080/"} id="9jA0ahFJAIzC" outputId="9b37149b-f062-44ea-f958-bf4808cb5ec0"
# !python -m pip install konlpy
# + colab={"base_uri": "https://localhost:8080/"} id="pyCRsyA9APmC" outputId="f6f92394-22b1-45ae-846c-e89e645a62c6"
# !curl -O https://raw.githubusercontent.com/konlpy/konlpy/master/scripts/mecab.sh
# + colab={"base_uri": "https://localhost:8080/"} id="6jSpdZbVASRH" outputId="1f9bf6c7-6cee-489a-f4ef-48d04bba54af"
# !source ./mecab.sh
# + [markdown] id="YW6B-CFYodo7"
# # koreanstopwords100
#
# Build a stopword dictionary and save it to DB.sqlite3
# + id="-SCGvMphqDE3"
import sqlite3
import pandas as pd
import numpy as np
# + colab={"base_uri": "https://localhost:8080/", "height": 417} id="gQlm6pG6oPhe" outputId="ec083b3e-8e58-44d8-dd09-1f3f637acb08"
data = pd.read_table('./drive/MyDrive/Forkspoon/koreanstopwords100.txt',header=None, names=['words','1','2'])
data['words']
df_data = pd.DataFrame(data['words'])
df_data
# + colab={"base_uri": "https://localhost:8080/"} id="5W21s3AaoVIh" outputId="7c1eead7-f03d-4206-9ae6-ad2fb3add078"
data2 = pd.read_table('./drive/MyDrive/Forkspoon/texts_over.txt',header=None,)
data2_series = pd.Series(data2[0], index=data2.index)
type(data2_series)
# + id="whYkJmSRAFyD"
from konlpy.tag import Mecab
mecab= Mecab()
# + colab={"base_uri": "https://localhost:8080/"} id="_5lBhGfTAk1n" outputId="7043a5fc-4867-4528-c8bc-753917c24da3"
# mecab.pos(data2_series[0])
data2_series_list= mecab.morphs(data2_series[0])
print(data2_series_list)
# + id="M92SaboxAks-"
df = pd.DataFrame(data2_series_list)
df.columns=['words']
# + colab={"base_uri": "https://localhost:8080/", "height": 417} id="8ZG16mpkAklg" outputId="5517e1f7-4a5a-4ee7-a311-3a9286b738e8"
df
# + id="NUpBUrcUs_Uy"
connect = sqlite3.connect('./db.sqlite3')
# + colab={"base_uri": "https://localhost:8080/", "height": 172} id="d-Yfu29zFYUS" outputId="3014fb6c-e301-425b-ff05-27da7f6ca834"
df_s = pd.read_sql_query('select * from stopwords',connect)
df_s.describe()
# + id="P49oKZVdE3RZ"
df_data.to_sql('stopwords', connect, if_exists='append',index=False)
# + id="zCK1jRIcFH5s"
df.to_sql('stopwords', connect, if_exists='append',index=False)
# + id="qy9jxrtDFlgW"
| Deeplearning/stopwords.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Ensemble methods
#
# The goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.
#
# Two families of ensemble methods are usually distinguished:
# - In averaging methods, the driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any single base estimator because its variance is reduced. Examples include bagging methods and forests of randomized trees, among others;
# - In boosting methods, by contrast, base estimators are built sequentially and each tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble. Examples include AdaBoost and Gradient Tree Boosting, among others.
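As a quick illustration (a minimal sketch with a synthetic dataset, not tied to the analysis below), the two families map directly onto scikit-learn estimators: `BaggingClassifier` averages independently built trees, while `AdaBoostClassifier` builds weak learners sequentially.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Averaging: independent estimators, predictions averaged (variance reduction)
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                            random_state=0)
# Boosting: estimators built sequentially on weak learners (bias reduction)
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                              n_estimators=25, random_state=0)

scores = {}
for name, clf in [("bagging", bagging), ("boosting", boosting)]:
    scores[name] = cross_val_score(clf, X, y, cv=3).mean()
    print(name, round(scores[name], 3))
```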
#
# ## Random Forests
#
# | Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Ensemble Family |
# | --- | --- | --- | --- | --- | --- |
# | *RandomForest* | *Ensemble Method (Meta-Estimator)* | *Supervised Learning* | *Supported* | *Supported* | *Averaging Methods* |
#
# The **sklearn.ensemble module** includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. Both algorithms are perturb-and-combine techniques, specifically designed for trees. This means a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers.
#
# In random forests (see RandomForestClassifier and RandomForestRegressor classes), each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set.
#
# The main parameters to adjust when using these methods are *n_estimators* and *max_features*. The former is the number of trees in the forest: the larger the better, but also the longer the computation takes; moreover, results stop getting significantly better beyond a critical number of trees. The latter is the size of the random subsets of features to consider when splitting a node: the lower it is, the greater the reduction of variance, but also the greater the increase in bias.
#
# Empirically good default values are max_features=None (always considering all features instead of a random subset) for regression problems, and max_features="sqrt" (using a random subset of size sqrt(n_features)) for classification tasks, where n_features is the number of features in the data. The best parameter values should always be cross-validated.
#
# We note that the size of the model with the default parameters is $O( M * N * log (N) )$, where $M$ is the number of trees and $N$ is the number of samples.
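A minimal sketch of the two main knobs on synthetic data (the parameter values are illustrative only, not the ones tuned later in this notebook):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=16, random_state=0)

# n_estimators: more trees -> better, but slower, with diminishing returns
# max_features: smaller subsets -> lower variance, higher bias
clf = RandomForestClassifier(n_estimators=50, max_features="sqrt",
                             bootstrap=True, oob_score=True, random_state=0)
clf.fit(X, y)
print(clf.oob_score_)          # out-of-bag estimate of generalization accuracy
```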
from utils.all_imports import *;
# %matplotlib inline
# Set seed for notebook repeatability
np.random.seed(0)
# +
# READ INPUT DATASET
# =========================================================================== #
dataset_path, dataset_name, column_names, TARGET_COL = get_dataset_location()
estimators_list, estimators_names = get_estimators()
dataset, feature_vs_values = load_brdiges_dataset(dataset_path, dataset_name)
# -
columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION']
# Make distinction between Target Variable and Predictors
# --------------------------------------------------------------------------- #
rescaledX, y, columns = prepare_data_for_train(dataset, target_col=TARGET_COL)
# +
# Parameters to be tested for Cross-Validation Approach
# -----------------------------------------------------
# Array used for storing graphs
plots_names = list(map(lambda xi: f"{xi}_learning_curve.png", estimators_names))
pca_kernels_list = ['linear', 'poly', 'rbf', 'cosine', 'sigmoid']
cv_list = list(range(10, 1, -1))
param_grids = []
params_random_forest = {
    'n_estimators': list(range(2, 10)),
    'criterion': ('gini', 'entropy'),
    'bootstrap': (True, False),
    'min_samples_leaf': list(range(1, 5)),
    'max_features': (None, 'sqrt', 'log2'),
    'max_depth': (None, 3, 5, 7, 10,),
    'class_weight': (None, 'balanced', 'balanced_subsample'),
}; param_grids.append(params_random_forest)
# Some variables to perform different tasks
# -----------------------------------------------------
N_CV, N_KERNEL, N_GS = 9, 5, 6;
nrows = N_KERNEL // 2 if N_KERNEL % 2 == 0 else N_KERNEL // 2 + 1;
ncols = 2; grid_size = [nrows, ncols]
# -
n_components=9
learning_curves_by_kernels(
# learning_curves_by_components(
estimators_list[:], estimators_names[:],
rescaledX, y,
train_sizes=np.linspace(.1, 1.0, 10),
n_components=9,
pca_kernels_list=pca_kernels_list[0],
verbose=0,
by_pairs=True,
savefigs=True,
scoring='accuracy',
figs_dest=os.path.join('figures', 'learning_curve', f"Pcs_{n_components}"), ignore_func=True,
# figsize=(20,5)
)
# + language="javascript"
# IPython.OutputArea.prototype._should_scroll = function(lines) {
# return false;
# }
# -
# | Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Ensemble Family |
# | --- | --- | --- | --- | --- | --- |
# | *RandomForest* | *Ensemble Method (Meta-Estimator)* | *Supervised Learning* | *Supported* | *Supported* | *Averaging Methods* |
# +
plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search")
X = rescaledX
df_gs, df_auc_gs = grid_search_all_by_n_components(
estimators_list=estimators_list[6], \
param_grids=param_grids[0],
estimators_names=estimators_names[6], \
X=X, y=y,
n_components=9,
random_state=0, show_plots=False, show_errors=False, verbose=1, plot_dest=plot_dest, debug_var=False)
df_9, df_9_auc = df_gs, df_auc_gs
# -
# Looking at the results obtained by running the *RandomForest Classifier* against our dataset, split into a training set and a test set, with different kernel tricks applied in the *kernel-PCA* unsupervised preprocessing step, we can state, generally speaking, that this *statistical learning technique* leads to the following results...
#
# - speaking about the __Linear kernel Pca based RandomForest Classifier__, when adopting the default threshold of *.5* for classification purposes we have a model that reaches an accuracy of *%* at test time against an accuracy of *%* at train step, while the Auc score reaches a value of *%*, with a Roc Curve that shows a behavior for which the model...
#
# - observing __Polynomial kernel Pca based RandomForest Estimator__, we can notice that such a model exploiting a default threshold of *.5* reaches an accuracy of *%* at test time against an accuracy of *%* at train step, while the Auc score reaches a value of *%*...
#
# - reviewing the __Rbf kernel Pca based RandomForest Classifier__, we can notice that such a model, exploiting a default threshold of *.5*, reaches an accuracy of *%* at test time against an accuracy of *%* at train step, while the Auc score reaches a value of *%*...
#
# - looking at __Cosine kernel Pca based RandomForest Classifier__, we can notice that such a model exploiting a default threshold of *.5* reaches an accuracy of *%* at test time against an accuracy of *%* at train step, while the Auc score reaches a value of *%*.
#
# - finally, referring to __Sigmoid kernel Pca based RandomForest Model__, we can notice that such a model exploiting a default threshold of *.5* reaches an accuracy of *%* at test time against an accuracy of *%* at train step, while the Auc score reaches a value of *%*.
create_widget_list_df([df_gs, df_auc_gs]) #print(df_gs); print(df_auc_gs)
# Looking at the table displayed just above, which shows the details of the hyper-parameter values selected during grid search for each fixed kernel trick of the kernel-PCA unsupervised method, and referring to the first two columns, *Train and Test Accuracy*, we can recognize which trials lead to more overfit results, such as the *Cosine and Sigmoid tricks*, and which to less overfit solutions, such as the *Linear, Polynomial, and Rbf tricks*. Speaking about the hyper-parameters, we can say what follows:
#
# - looking at __n_estimators hyper-parameter__, which refers to the number of trees in the forest, ...
#
# - reviewing the __criterion parameter__, which represents the *function to measure the quality of a split*; the supported criteria are *“gini”* for the Gini impurity and *“entropy”* for the information gain, and this parameter is *tree-specific*...
#
# - speaking about the __bootstrap hyper-parameter__, we know that when it is enabled, *bootstrap samples* are used when building trees; otherwise, the whole dataset is used to build each tree...
#
# - looking at the __min_samples_leaf hyper-param__, which describes the minimum number of samples required to be at a leaf node...
#
# - describing the __max_features hyper-param__, which refers to the number of features to consider when looking for the best split; the supported choices are *“auto”, “sqrt”, “log2”*. If “auto”, then max_features=sqrt(n_features). If “sqrt”, then max_features=sqrt(n_features) (same as “auto”). If “log2”, then max_features=log2(n_features). Finally, if None, then max_features=n_features. We note that the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
#
# - looking at the __max_depth hyper-param__, which reflects the maximum depth of the tree; if None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than *min_samples_split* samples...
#
# - viewing the __class_weight hyper-param__, which refers to the weights associated with classes; if not given, all classes are assumed to have weight one. The *“balanced”* mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data, as n_samples / (n_classes * np.bincount(y)). The *“balanced_subsample”* mode is the same as “balanced”, except that weights are computed based on the bootstrap sample for every tree grown.
#
#
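The "balanced" weighting formula quoted above can be checked directly with a small sketch on toy labels:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])   # 3 samples of class 0, 1 sample of class 1
w = compute_class_weight(class_weight="balanced",
                         classes=np.array([0, 1]), y=y)
# n_samples / (n_classes * np.bincount(y)) = 4 / (2 * [3, 1])
print(w)   # -> [0.666..., 2.0]
```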
# If we imagine building up an *Ensemble Classifier* from the family of *averaging methods*, whose underlying principle is to build separate, single classifiers and then average their predictions in a regression context (or adopt a majority-vote strategy in a classification context), we can claim that, among the proposed classifiers, we could certainly employ the classifiers found in the first three trials, both because of their performance metrics and because ensemble methods such as the Bagging Classifier usually work well with an ensemble of independent, fine-tuned classifiers, unlike boosting methods, which are instead based on weak learners.
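A reduced, runnable sketch of the grid search described above (synthetic data and a deliberately small version of the parameter grid defined earlier; the kernel-PCA preprocessing step is omitted):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=9, random_state=0)

param_grid = {
    "n_estimators": [5, 10],
    "criterion": ["gini", "entropy"],
    "max_features": [None, "sqrt"],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_)
print(round(search.best_score_, 3))
```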
# ### Random Forest Classifiers References
# - (Ensemble, Non-Parametric Learning: RandomForest) https://scikit-learn.org/stable/modules/ensemble.html#forest
| pittsburgh-bridges-data-set-analysis/models-analyses/merge_analyses/Section 2.7 - Random Forest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fixed Point Iteration: solving $x = cos(x)$
import math
xHist = []
itMax = 10000
eps = 1.0e-14
lhs = lambda x: math.cos(x)
x = 0.0
xHist.append(x)
for i in range(1, itMax + 1):
x = math.cos(x)
xHist.append(x)
if abs(lhs(x) - x) < eps:
break
print('ans = ', x)
print('abs(error) = ', abs(lhs(x) - x))
print('# iteration = ', i)
# ## Rate of Convergence $$ q\approx\frac{\log{|\frac{x_{n+1}-x_{n}}{x_{n}-x_{n-1}}|}}{\log{|\frac{x_{n}-x_{n-1}}{x_{n-1}-x_{n-2}}|}} $$
#
# https://en.wikipedia.org/wiki/Rate_of_convergence
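The same estimator applied to a quadratically convergent sequence should give q close to 2. A quick sketch using Newton's method on the same equation cos(x) = x (the `newton_history` helper is illustrative, not part of the notebook's code):

```python
import math

def newton_history(x, n):
    """Newton iterates for f(x) = cos(x) - x."""
    hist = [x]
    for _ in range(n):
        x = x - (math.cos(x) - x) / (-math.sin(x) - 1.0)
        hist.append(x)
    return hist

xs = newton_history(1.0, 4)                      # x0 .. x4
d = [xs[i + 1] - xs[i] for i in range(4)]
q = math.log(abs(d[3] / d[2])) / math.log(abs(d[2] / d[1]))
print(q)    # close to 2 -> quadratic convergence
```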
convergenceRate = []
for i in range(3, len(xHist)):
top = math.log(abs( (xHist[i] - xHist[i-1]) / (xHist[i-1] - xHist[i-2]) ))
bot = math.log(abs( (xHist[i-1] - xHist[i-2]) / (xHist[i-2] - xHist[i-3]) ))
convergenceRate.append(top/bot)
convergenceRate[-1]
# #### Rate of convergence for the fixed-point iteration is linear
# ### Speed test
def timeTest():
itMax = 10000
eps = 1.0e-14
lhs = lambda x: math.cos(x)
x = 0.0
for i in range(1, itMax + 1):
x = math.cos(x)
if abs(lhs(x) - x) < eps:
break
ans = x
# %%timeit -n 10
timeTest()
from scipy import optimize
ans_scipy = optimize.fixed_point(lhs,0, method = 'iteration', xtol = 1.0e-14)
ans_scipy
x - ans_scipy
math.cos(ans_scipy) - ans_scipy
from scipy.optimize import fixed_point
def timeTest2():
ans_scipy = fixed_point(lhs,0, method = 'iteration', xtol = 1.0e-14)
# %%timeit -n 10
timeTest2()
def timeTest3():
ans_scipy = fixed_point(lhs,0, method = "del2", xtol = 1.0e-14)
# %%timeit -n 10
timeTest3()
# # Fixed-point: Divergent case
lhs = lambda x: 3*math.exp(-x)-1
itMax = 10000
eps = 1.0e-14
x = 0.0
for i in range(1, itMax + 1):
    x = lhs(x)
if abs(lhs(x) - x) < eps:
break
print('ans = ', x)
print('abs(error) = ', abs(lhs(x) - x))
print('# iteration = ', i)
import sys
try:
ans_scipy = optimize.fixed_point(lhs,0, method = 'iteration', xtol = 1.0e-14, maxiter = 1000)
except Exception as e:
print(e)
print(sys.exc_info()[0])
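Why this particular g diverges can be checked against the standard condition: plain fixed-point iteration x_{n+1} = g(x_n) converges locally only when |g'(x*)| < 1 at the fixed point. A small sketch, using `scipy.optimize.brentq` only to locate the fixed point:

```python
import math
from scipy.optimize import brentq

g = lambda x: 3.0 * math.exp(-x) - 1.0

# locate the fixed point g(x) = x in [0, 1]
x_star = brentq(lambda x: g(x) - x, 0.0, 1.0)
g_prime = -3.0 * math.exp(-x_star)
print(x_star, abs(g_prime))    # |g'(x*)| > 1, so the plain iteration diverges
```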
# # Redo using: <font color=#000066 face="courier new"> scipy.optimize.fsolve </font>
import numpy as np
f = lambda x: np.cos(x) - x
import scipy
scipy.optimize.fsolve(f, 0)
# # Multiple Answer: $sin(x) = cos(x)$
# Trick: run the code multiple times, with an array of initial guesses
f = lambda x: np.sin(x) - np.cos(x)
scipy.optimize.fsolve(f,np.arange(-10,10,0.5))
rhs = lambda x: np.arcsin(np.cos(x)) #arcsin just gives 1 value
scipy.optimize.fixed_point(rhs,np.arange(-10,10,0.5))
rhs = lambda x: np.cos(x) - np.sin(x) + x
scipy_ans = scipy.optimize.fixed_point(rhs,np.arange(-10,10,0.5), method='iteration')
scipy_ans
scipy_ans.shape
for i,x0 in enumerate(np.arange(-10,10,0.5)):
ans = scipy.optimize.fixed_point(rhs, x0)
print("{:.8f} ".format(float(ans)),end='')
if (i+1)%5 == 0:
print('\n',end='')
def fixPoint(rhs,x):
itMax = 10000
eps = 1.0e-14
for i in range(1, itMax + 1):
x = rhs(x)
if abs(rhs(x) - x) < eps:
break
return (x)
for i,x0 in enumerate(np.arange(-10,10,0.5)):
ans = fixPoint(rhs, x0)
print("{:.8f} ".format(float(ans)),end='')
if (i+1)%5 == 0:
print('\n',end='')
rhs = lambda x: np.cos(x) - np.sin(x) + x
ans_user =[fixPoint(rhs,x) for x in np.arange(-10,10,0.5)]
ans_user
y_sol = np.zeros(len(ans_user))
x = np.arange(-15,15,0.5)
y = [math.cos(i) - math.sin(i) for i in x]
import matplotlib.pyplot as plt
import matplotlib as mpl
font = {'size': 15}
mpl.rc('font', **font)
plt.figure()
plt.plot(x,y,'-b')
plt.plot(ans_user,y_sol,'or')
plt.xlim(-10,10)
plt.title("cos(x) - sin(x): user answer")
plt.show()
plt.figure()
plt.plot(x,y,'-b')
plt.plot(scipy_ans,y_sol,'og')
plt.xlim(-10,10)
plt.title("cos(x) - sin(x): \nscipy fixed-point answer")
plt.show()
plt.figure()
plt.plot(x,np.cos(x),'-b', label = 'cos')
plt.plot(x,np.sin(x),'-r', label = 'sin')
plt.plot(ans_user,np.sin(ans_user),'oc', label = 'roots')
plt.xlim(-10,10)
plt.legend(loc = 4, framealpha = 0.5)
plt.title("sin(x) & cos(x)")
plt.show()
# # Bisection Method
fx = lambda x: math.sin(x) - math.cos(x)
L = 0
R = 1
eps = 1e-14
maxIteration = 1000
xHist = []
for i in range(0, maxIteration):
M = 0.5 * (L+R)
xHist.append(M)
fL = fx(L)
fR = fx(R)
fM = fx(M)
if abs(fL) < eps:
ans = L
break
if abs(fR) < eps:
ans = R
break
if abs(fM) < eps:
ans = M
break
if ((fL > 0) and (fM < 0)) or ((fL < 0) and (fM > 0)):
R = M
elif ((fR > 0) and (fM < 0)) or ((fR < 0) and (fM > 0)):
L = M
else:
print('no answer in the given domain')
break
if abs(fM) < eps:
ans = M
break
print('ans = ', ans)
print('number of iteration = ', i)
print('error = ', fM)
convergenceRate = []
for i in range(3, len(xHist)):
top = math.log(abs( (xHist[i] - xHist[i-1]) / (xHist[i-1] - xHist[i-2]) ))
bot = math.log(abs( (xHist[i-1] - xHist[i-2]) / (xHist[i-2] - xHist[i-3]) ))
convergenceRate.append(top/bot)
convergenceRate
scipy.optimize.bisect(fx,0,1)
def myBisec(fx, L, R, eps = 1e-14, maxIteration = 1000):
xHist = []
for i in range(0, maxIteration):
M = 0.5 * (L+R)
xHist.append(M)
fL = fx(L)
fR = fx(R)
fM = fx(M)
if abs(fL) < eps:
ans = L
break
if abs(fR) < eps:
ans = R
break
if abs(fM) < eps:
ans = M
break
if ((fL > 0) and (fM < 0)) or ((fL < 0) and (fM > 0)):
R = M
elif ((fR > 0) and (fM < 0)) or ((fR < 0) and (fM > 0)):
L = M
else:
print('no answer in the given domain')
break
if abs(fM) < eps:
ans = M
break
print('ans = ', ans)
print('number of iteration = ', i)
print('error = ', fM)
convergenceRate = []
for i in range(3, len(xHist)):
top = math.log(abs( (xHist[i] - xHist[i-1]) / (xHist[i-1] - xHist[i-2]) ))
bot = math.log(abs( (xHist[i-1] - xHist[i-2]) / (xHist[i-2] - xHist[i-3]) ))
convergenceRate.append(top/bot)
print('convergence rate = ', np.mean(convergenceRate))
myBisec(fx,0,1)
# %timeit -n 10 scipy.optimize.bisect(fx,0,1)
# %timeit -n 10 myBisec(fx,0,1)
def myBisecPlain(fx, L, R, eps = 1e-14, maxIteration = 1000):
for i in range(0, maxIteration):
M = 0.5 * (L+R)
fL = fx(L)
fR = fx(R)
fM = fx(M)
if ((fL > 0) and (fM < 0)) or ((fL < 0) and (fM > 0)):
R = M
else:
L = M
if abs(fM) < eps:
ans = M
break
return(ans)
# %timeit -n 10 myBisecPlain(fx,0,1)
myBisecPlain(fx,0,1)
plt.figure(figsize = (16,8))
fx = lambda x: np.sin(x) - np.cos(x)
x = np.arange(0,1,0.05).tolist()
y = fx(x)
x2 = xHist[0:7]
y2 = fx(x2)
z = [i for i,j in enumerate(x2)]
plt.plot(x, y)
t = np.arange(0,100)
plt.scatter(x2, y2, s = 200, c = z, cmap = mpl.cm.nipy_spectral)
plt.colorbar()
plt.grid()
plt.ylabel('f(x)')
plt.xlabel('x')
plt.show()
math.log(10)
np.log(10)
np.log10(10)
math.log10(10)
def fx2(x, Re = 1e5, D = 0.052, ep = 150.0e-6 ):
return 1/x**0.5 + 4 * math.log10(ep/D/3.7 + 1.256 / Re / x**0.5)
myBisec(fx2,1e-15,1)
fx2(0.006490259249563085)
# ## Many Inputs via: <font color=#000066 face="courier new"> keyword arguments </font>
# Additional reading / Reference
# <br>http://book.pythontips.com/en/latest/args_and_kwargs.html
# <br>https://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/
def fTest1(fn,**kw):
print(fn(**kw))
fTest1(fx2, x = 0.006, Re = 4000, ep = 500e-6)
fx2(x = 0.006, Re = 4000, ep = 500e-6)
def myBisecManyInput(fx, L, R, eps = 1e-14, maxIteration = 1000,**kw):
xHist = []
for i in range(0, maxIteration):
M = 0.5 * (L+R)
xHist.append(M)
fL = fx(L,**kw)
fR = fx(R,**kw)
fM = fx(M,**kw)
if abs(fL) < eps:
ans = L
break
if abs(fR) < eps:
ans = R
break
if abs(fM) < eps:
ans = M
break
if ((fL > 0) and (fM < 0)) or ((fL < 0) and (fM > 0)):
R = M
elif ((fR > 0) and (fM < 0)) or ((fR < 0) and (fM > 0)):
L = M
else:
print('no answer in the given domain')
break
if abs(fM) < eps:
ans = M
break
print('ans = ', ans)
print('number of iteration = ', i)
print('error = ', fM)
convergenceRate = []
for i in range(3, len(xHist)):
top = math.log(abs( (xHist[i] - xHist[i-1]) / (xHist[i-1] - xHist[i-2]) ))
bot = math.log(abs( (xHist[i-1] - xHist[i-2]) / (xHist[i-2] - xHist[i-3]) ))
convergenceRate.append(top/bot)
print('convergence rate = ', np.mean(convergenceRate))
return ans
myBisecManyInput(fx2, 1e-10,1, D = 0.2, Re = 1e6)
_
fx2(_,D = 0.2, Re = 1e6)
scipy.optimize.bisect(fx2, 1e-10, 1, args = (1e6, 0.2, 150.0e-6 ), xtol=1e-14)
fx2(_ ,1e6, 0.2, 150.0e-6)
# # Newton-Raphson Method
def fx2(x, Re = 1e5, D = 0.052, ep = 150.0e-6 ):
return 1/x**0.5 + 4 * math.log10(ep/D/3.7 + 1.256 / Re / x**0.5)
plt.figure()
x = np.linspace(0.0001,0.1, 100).tolist()
y = list(map(fx2,x))
plt.plot(x,y)
plt.title("f(Colebrook) = LHS-RHS")
plt.show()
def myNewton(fx, args = [], eps = 1e-10, x0 = 1e-9, maxIt = 1000):
for i in range(0,maxIt):
xOld = x0
slope = (fx(x0 + 0.5 * eps, *args) - fx(x0 - 0.5 * eps, *args))/eps
fxVal = fx(x0, *args)
try:
x0 = x0 - fxVal / slope
except Exception as e:
print(e)
print(sys.exc_info()[0])
print('slope = ', slope)
if abs(x0 - xOld) < eps:
print('#iteration = ', i)
print('ans = ', x0)
print('error = ', fx(x0, *args))
return x0
print('cannot find answer')
print('#iteration = ', i)
print('ans = ', x0)
return x0
myNewton(fx2, args = [1e6, 0.2, 150.0e-6 ])
myNewton(fx2, x0 = 1e-9, args = [1e6, 0.2, 150.0e-6 ])
args = [1e6, 0.2, 150.0e-6 ]
fx2(_, *args)
scipy.optimize.newton(fx2, 1e-9, args = tuple(args), tol = 1e-15)
fx2(_, *args)
def myNewtonPlain(fx, args = [], eps = 1e-10, x0 = 1e-9, maxIt = 1000):
for i in range(0,maxIt):
xOld = x0
slope = (fx(x0 + 0.5 * eps, *args) - fx(x0 - 0.5 * eps, *args))/eps
fxVal = fx(x0, *args)
try:
x0 = x0 - fxVal / slope
except Exception as e:
print(e)
print(sys.exc_info()[0])
print('slope = ', slope)
if abs(x0 - xOld) < eps:
return x0
print('cannot find answer')
print('#iteration = ', i)
print('ans = ', x0)
return x0
# %%timeit -n 10
myNewtonPlain(fx2, args = [1e6, 0.2, 150.0e-6 ])
# %%timeit -n 10
scipy.optimize.newton(fx2, 1e-9, args = tuple(args), tol = 1e-15)
# # Class with root finding method
class RootFindClass:
def __init__(self, fx, x0 = 1, LLim = -1000, RLim = 1000, xTol = 1e-14, maxIt = 1000, args = ()):
self.x0 = x0
self.LLim = LLim
self.RLim = RLim
self.xTol = xTol
self.maxIt = maxIt
self.args = args
self.fx = fx
def fix_point(self):
self.RHS = lambda x: self.fx(x) + x
return scipy.optimize.fixed_point(self.RHS, self.x0, xtol = self.xTol, args = self.args)
def bisect(self):
return scipy.optimize.bisect(self.fx, self.LLim, self.RLim, xtol = self.xTol, args = self.args)
def newton(self):
return scipy.optimize.newton(self.fx, self.x0, tol = self.xTol, args = self.args)
#operator overloading for + operation
def __add__(self, other):
return RootFindClass(lambda x: self.fx(x) + other.fx(x), self.x0,
self.LLim, self.RLim, self.xTol, self.maxIt, self.args + other.args)
# +
f1 = lambda x: math.cos(x) - x
func_1 = RootFindClass(f1, 1, 0, 2)
def print_f1():
sp_output = scipy.optimize.fixed_point(lambda x: f1(x) + x, 1, xtol = 1e-14)
user_output = func_1.fix_point()
print('fixed-point')
print('scipy output = ', sp_output)
print(' user output = ', user_output, end = '\n\n')
sp_output = scipy.optimize.bisect(f1, 0, 2, xtol = 1e-14)
user_output = func_1.bisect()
print('bisection')
print('scipy output = ', sp_output)
print(' user output = ', user_output, end = '\n\n')
sp_output = scipy.optimize.newton(f1, 1, tol = 1e-14)
user_output = func_1.newton()
print('Newton')
print('scipy output = ', sp_output)
print(' user output = ', user_output, end = '\n\n')
print_f1()
# +
def f1(x, Re = 1e5, D = 0.052, ep = 150.0e-6 ):
return 1/x**0.5 + 4 * math.log10(ep/D/3.7 + 1.256 / Re / x**0.5)
func_1 = RootFindClass(f1, 1e-9, 1e-10, 1, args = (1e6, 0.2, 150e-6))
sp_output = scipy.optimize.bisect(f1, 1e-10, 1, xtol = 1e-14, args = (1e6, 0.2, 150e-6))
user_output = func_1.bisect()
print('bisection')
print('scipy output = ', sp_output)
print(' user output = ', user_output, end = '\n\n')
sp_output = scipy.optimize.newton(f1, 1e-9, tol = 1e-14, args = (1e6, 0.2, 150e-6))
user_output = func_1.newton()
print('Newton')
print('scipy output = ', sp_output)
print(' user output = ', user_output, end = '\n\n')
# +
f1 = lambda x: math.cos(x) - math.sin(x)
func_1 = RootFindClass(f1, 1, 0, 2)
print_f1()
# +
func_a = RootFindClass(math.sin, 1, 0, 2)
func_b = RootFindClass(lambda x: -math.cos(x), 1, 0, 2)
func_1 = func_a + func_b
print_f1()
# -
func_1.newton()
# # Solving polynomial: <font color=#000066 face="courier new"> $x^2 - 7 = 0$ </font>
# numpy.roots
# pass a range of initial guesses and solve with scipy
ans = np.roots([1,0,-7])
ans
ans[0] ** 2
ans[1] ** 2
fx = lambda x: x**2 -7
scipy.optimize.fsolve(fx, [-5,0,5])
# ### Truncate then use set to get unique answer
ans = scipy.optimize.fsolve(fx, [-5,0,5])
{float('{0:.10f}'.format(i)) for i in ans}
# # Solving Analytically: <font color=#000066 face="courier new"> $x^2 - 7 = 0$ </font>
import sympy as sm
x, y = sm.symbols('x y')
sm.solve(x**2-7)
sm.init_printing(use_unicode=True)
sm.solve(x**2-7)
E1 = x**2 - 7
sm.solve(E1)
E2 = x**2 - y
sm.solve(E2,x)
sm.diff(E2,x)
ans = sm.diff(E2,x)
type(ans)
py_func = sm.lambdify(x, ans)
py_func(2)
type(py_func)
f, Re, ep, D = sm.symbols('f Re ep D')
E2 = 1/f**0.5 + 4 * sm.functions.log(ep/D/3.7 + 1.256 / Re / f**0.5, 10)
py_func0 = sm.lambdify(('f=0', 'Re=0', 'D=0', 'ep=0'), E2)
py_func0(Re = 1e5, f = 0.001, D = 0.05, ep = 150e-6)
def fx2(x, Re = 1e5, D = 0.052, ep = 150.0e-6 ):
return 1/x**0.5 + 4 * math.log10(ep/D/3.7 + 1.256 / Re / x**0.5)
py_func0(0.001, 1e5, 0.05, 150e-6)
fx2(0.001, 1e5, 0.05, 150e-6)
py_func0b = sm.lambdify('f=0, Re=0, D=0, ep=0', E2)
py_func0b(0.001, 1e5, 0.05, 150e-6)
py_func0b(Re = 1e5, f = 0.001, D = 0.05, ep = 150e-6)
sm.diff(E2, f)
diff_fn = sm.lambdify('f=0, Re=0, D=0, ep=0', sm.diff(E2, f))
diff_fn(Re = 1e5, f = 0.001, D = 0.05, ep = 150e-6)
# ### Central finite difference won't be exact, but it will be close
( (fx2(0.001+1e-9, 1e5, 0.05, 150e-6) - fx2(0.001-1e-9, 1e5, 0.05, 150e-6))/
2e-9)
(_ - __)/__ * 100
sm.__version__
E3 = sm.diff(E2, f)
type(E3)
# ### Substitution and evaluation
E4 = E3.subs({Re:1e5, f:0.001, D:0.05, ep:150e-6})
E4
E4.evalf()
sm.N(E4)
# # Sympy Integration
x, y = sm.symbols('x y')
sm.integrate(x,x)
sm.integrate(sm.log(x), x)
sm.integrate(sm.log(x), x).subs({x:2})
sm.integrate(sm.log(x), x).subs({x:2}).evalf()
sm.integrate(sm.log(x), (x,0,10))
x * sm.log(x)
try:
ans = 0 * math.log(0)
except Exception as e:
print(e)
print(sys.exc_info()[0])
ans = x * sm.log(x)
ans.subs({x:0})
sm.limit(ans, x, 0)
sm.integrate(sm.log(x), (x,0,1))
sm.integrate(sm.log(x), (x,0,y))
# At what y does $\int_{0}^{y}\log{x}\, dx = 1$?
ans2 = sm.integrate(sm.log(x), (x,0,y))
root_ans = scipy.optimize.fsolve(sm.lambdify(y, ans2 - 1),3)[0]
root_ans
root_ans * math.log(root_ans) - root_ans
sm.integrate(sm.log(x), (x,0,root_ans))
# # Combine Mpmath and Scipy: Ei function
#
# <br>
# <font size = 4.5> Wikipedia & Wolfram: $\operatorname{Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}t\,dt$
# <br> Sympy Library: $\operatorname{Ei}(x)=\int_{-\infty}^{x}\frac{e^{t}}t\,dt$
# <br> are they the same? yes <br>
# <br> for u = -t
# <br> @$t = \infty, u = -\infty$
# <br> @$t = -x, u = x$
# <br> @$dt = -du$ <br>
# <br> $\operatorname{Ei}(x)=-\int_{-x}^{\infty}\frac{e^{-t}}t\,dt
# = -\int_{u = x}^{u = -\infty}\frac{e^{u}}{-u}\,-du$
# <br><br> $-\int_{u = x}^{u = -\infty}\frac{e^{u}(-1)}{-u}\,du
# = \int_{-\infty}^{x}\frac{e^{u}}{u}\,du
# =\int_{-\infty}^{x}\frac{e^{t}}t\,dt$
# </font>
import mpmath
mpmath.ei(1)
float(mpmath.ei(1))
# We may double check the value from <a href ="https://goo.gl/EkWriV">http://www.ams.org</a>
# ## At what x, Ei(x) = 1.00000
x = np.linspace(0,1,1000)
y = list(map(mpmath.ei,x))
plt.figure()
plt.plot(x,y)
plt.show()
ans = scipy.optimize.bisect(lambda x: float(mpmath.ei(x)) - 1,0.1,0.9,xtol=1e-14)
ans
mpmath.ei(ans)
ans = scipy.optimize.fsolve(lambda x: float(mpmath.ei(float(x))) - 1,0.9,xtol=1e-14)
ans[0]
try:
ans = scipy.optimize.fsolve(lambda x: float(mpmath.ei(x)) - 1,0.9,xtol=1e-14)
except Exception as e:
print(e)
print(sys.exc_info()[0])
# We previously solved the 'TypeError' by forcing the input to mpmath.ei to be a float (not an array)
| Ipynb/L04_RootFinding.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# This script calculates:
# - nbr of scenes
# - nbr of images
# - nbr of vehicle positions
# - nbr of instances
#
# The directory should look something like this (as described on official Kaggle page):
#
# - root_dir:
# - day
# - test
# - images
# - labels
# - splits
# - train
# - ...
# - val
# - ...
# - night
# - test
# - ...
# - train
# - ...
# - val
# - ...
# + pycharm={"name": "#%%\n"}
import os
import json
from tqdm import tqdm
# adjust root_dir to your path
root_dir = "/media/lukas/empty/EODAN_Dataset"
# + pycharm={"name": "#%%\n"}
root_dir = os.path.abspath(root_dir)
light_cycles = ("day", "night")
splits = ("train", "test", "val")
total_stats = {"scenes": 0, "images": 0, "vehicles": 0, "vehicle_directs":
0, "instances": 0, "instance_directs": 0}
for cycle in light_cycles:
cycle_stats = {"scenes": 0, "images": 0, "vehicles": 0,
"vehicle_directs": 0, "instances": 0, "instance_directs": 0}
for split in splits:
split_stats = {"scenes": 0, "images": 0, "vehicles": 0,
"vehicle_directs": 0, "instances": 0,
"instance_directs": 0}
path = os.path.join(root_dir, cycle, split)
scenes = os.listdir(os.path.join(path, "images"))
split_stats["scenes"] = len(scenes)
images = []
# check each scene
for scene in scenes:
# get image files for each scene
images += os.listdir(os.path.join(path, "images", scene))
split_stats["images"] = len(images)
# for each image, check the annotation file
kp_annot_path = os.path.join(path, "labels/keypoints")
annot_files = os.listdir(kp_annot_path)
for img in tqdm(images):
with open(os.path.join(kp_annot_path, f"{img.split('.')[0]}"
f".json")) as f:
annot = json.load(f)["annotations"]
            # number of vehicles
            split_stats["vehicles"] += len(annot)
            # number of direct vehicles
            split_stats["vehicle_directs"] += sum(
                int(vehicle["direct"]) for vehicle in annot)
            # number of instances
            split_stats["instances"] += sum(
                len(vehicle["instances"]) for vehicle in annot)
            # number of direct instances
            split_stats["instance_directs"] += sum(
                int(inst["direct"]) for vehicle in annot
                for inst in vehicle["instances"])
print("\nCycle: {}\tSplit: {}".format(cycle, split))
print(split_stats)
cycle_stats = {k: cycle_stats[k]+v for k, v in split_stats.items()}
print("---------------------------------------------------")
print(f"Total stats ({cycle}):")
print(cycle_stats)
print("")
total_stats = {k: total_stats[k]+v for k, v in cycle_stats.items()}
print("\n\nTotal stats:")
print(total_stats)
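# As an aside (not a change to the script), the per-key accumulation done with the
# dict comprehensions above can also be written with `collections.Counter`, which
# supports `+=` directly. One caveat: `Counter` drops keys whose count is not
# positive during `+=`, so zero-valued stats disappear from the result.

```python
from collections import Counter

total = Counter(scenes=0, images=0)
split = Counter(scenes=3, images=120)
total += split  # per-key addition; non-positive counts are dropped
print(dict(total))  # -> {'scenes': 3, 'images': 120}
```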
# + pycharm={"name": "#%%\n"}
rel_vehicle_directs = total_stats["vehicle_directs"] / total_stats["vehicles"]
rel_instance_directs = total_stats["instance_directs"] / \
total_stats["instances"]
print("Relative directs/indirects:")
print(f"Vehicles:\t Directs: {rel_vehicle_directs*100:.2f}%\tIndirects: "
      f"{(1-rel_vehicle_directs)*100:.2f}%")
print(f"Instances:\t Directs: {rel_instance_directs*100:.2f}%\tIndirects: "
      f"{(1-rel_instance_directs)*100:.2f}%")
# + pycharm={"name": "#%%\n"}
| examples/evaluateDatasetStatistics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 7
#
# This is the seventh in a series of notebooks related to astronomy data.
#
# As a continuing example, we will replicate part of the analysis in a recent paper, "[Off the beaten path: Gaia reveals GD-1 stars outside of the main stream](https://arxiv.org/abs/1805.00425)" by <NAME> and <NAME>.
#
# In the previous notebook we selected photometry data from Pan-STARRS and used it to identify stars we think are likely to be in GD-1.
#
# In this notebook, we'll take the results from previous lessons and use them to make a figure that tells a compelling scientific story.
# ## Outline
#
# Here are the steps in this notebook:
#
# 1. Starting with the figure from the previous notebook, we'll add annotations to present the results more clearly.
#
# 2. Then we'll see several ways to customize figures to make them more appealing and effective.
#
# 3. Finally, we'll see how to make a figure with multiple panels or subplots.
#
# After completing this lesson, you should be able to
#
# * Design a figure that tells a compelling story.
#
# * Use Matplotlib features to customize the appearance of figures.
#
# * Generate a figure with multiple subplots.
# ## Installing libraries
#
# If you are running this notebook on Colab, you can run the following cell to install Astroquery and the other libraries we'll use.
#
# If you are running this notebook on your own computer, you might have to install these libraries yourself.
#
# If you are using this notebook as part of a Carpentries workshop, you should have received setup instructions.
#
# TODO: Add a link to the instructions.
# +
# If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
# !pip install astroquery astro-gala pyia python-wget
# -
# ## Making Figures That Tell a Story
#
# So far the figures we've made have been "quick and dirty". Mostly we have used Matplotlib's default style, although we have adjusted a few parameters, like `markersize` and `alpha`, to improve legibility.
#
# Now that the analysis is done, it's time to think more about:
#
# 1. Making professional-looking figures that are ready for publication, and
#
# 2. Making figures that communicate a scientific result clearly and compellingly.
#
# Not necessarily in that order.
# Let's start by reviewing Figure 1 from the original paper. We've seen the individual panels, but now let's look at the whole thing, along with the caption:
#
# <img width="500" src="https://github.com/datacarpentry/astronomy-python/raw/gh-pages/fig/gd1-5.png">
# **Exercise:** Think about the following questions:
#
# 1. What is the primary scientific result of this work?
#
# 2. What story is this figure telling?
#
# 3. In the design of this figure, can you identify 1-2 choices the authors made that you think are effective? Think about big-picture elements, like the number of panels and how they are arranged, as well as details like the choice of typeface.
#
# 4. Can you identify 1-2 elements that could be improved, or that you might have done differently?
# Some topics that might come up in this discussion:
#
# 1. The primary result is that the multiple stages of selection make it possible to separate likely candidates from the background more effectively than in previous work, which makes it possible to see the structure of GD-1 in "unprecedented detail".
#
# 2. The figure documents the selection process as a sequence of steps. Reading right-to-left, top-to-bottom, we see selection based on proper motion, the results of the first selection, selection based on color and magnitude, and the results of the second selection. So this figure documents the methodology and presents the primary result.
#
# 3. It's mostly black and white, with minimal use of color, so it will work well in print. The annotations in the bottom left panel guide the reader to the most important results. It contains enough technical detail for a professional audience, but most of it is also comprehensible to a more general audience. The two left panels have the same dimensions and their axes are aligned.
#
# 4. Since the panels represent a sequence, it might be better to arrange them left-to-right. The placement and size of the axis labels could be tweaked. The entire figure could be a little bigger to match the width and proportion of the caption. The top left panel has unused white space (but that leaves space for the annotations in the bottom left).
# ## Plotting GD-1
#
# Let's start with the panel in the lower left. The following cell reloads the data.
# +
import os
from wget import download
filename = 'gd1_merged.hdf5'
path = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
print(download(path+filename))
# +
import pandas as pd
selected = pd.read_hdf(filename, 'selected')
# +
import matplotlib.pyplot as plt
def plot_second_selection(df):
x = df['phi1']
y = df['phi2']
plt.plot(x, y, 'ko', markersize=0.7, alpha=0.9)
    plt.xlabel(r'$\phi_1$ [deg]')
    plt.ylabel(r'$\phi_2$ [deg]')
plt.title('Proper motion + photometry selection', fontsize='medium')
plt.axis('equal')
# -
# And here's what it looks like.
plt.figure(figsize=(10,2.5))
plot_second_selection(selected)
# ## Annotations
#
# The figure in the paper uses three other features to present the results more clearly and compellingly:
#
# * A vertical dashed line to distinguish the previously undetected region of GD-1,
#
# * A label that identifies the new region, and
#
# * Several annotations that combine text and arrows to identify features of GD-1.
#
# As an exercise, choose any or all of these features and add them to the figure:
#
# * To draw vertical lines, see [`plt.vlines`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.vlines.html) and [`plt.axvline`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.axvline.html#matplotlib.pyplot.axvline).
#
# * To add text, see [`plt.text`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.text.html).
#
# * To add an annotation with text and an arrow, see [`plt.annotate`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.annotate.html).
#
# And here is some [additional information about text and arrows](https://matplotlib.org/3.3.1/tutorials/text/annotations.html#plotting-guide-annotation).
# +
# Solution
# plt.axvline(-55, ls='--', color='gray',
# alpha=0.4, dashes=(6,4), lw=2)
# plt.text(-60, 5.5, 'Previously\nundetected',
# fontsize='small', ha='right', va='top');
# arrowprops=dict(color='gray', shrink=0.05, width=1.5,
# headwidth=6, headlength=8, alpha=0.4)
# plt.annotate('Spur', xy=(-33, 2), xytext=(-35, 5.5),
# arrowprops=arrowprops,
# fontsize='small')
# plt.annotate('Gap', xy=(-22, -1), xytext=(-25, -5.5),
# arrowprops=arrowprops,
# fontsize='small')
# -
# ## Customization
#
# Matplotlib provides a default style that determines things like the colors of lines, the placement of labels and ticks on the axes, and many other properties.
#
# There are several ways to override these defaults and customize your figures:
#
# * To customize only the current figure, you can call functions like `tick_params`, which we'll demonstrate below.
#
# * To customize all figures in a notebook, you use `rcParams`.
#
# * To override more than a few defaults at the same time, you can use a style sheet.
# As a simple example, notice that Matplotlib puts ticks on the outside of the figures by default, and only on the left and bottom sides of the axes.
#
# To change this behavior, you can use `gca()` to get the current axes and `tick_params` to change the settings.
#
# Here's how you can put the ticks on the inside of the figure:
#
# ```
# plt.gca().tick_params(direction='in')
# ```
#
# **Exercise:** Read the documentation of [`tick_params`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.axes.Axes.tick_params.html) and use it to put ticks on the top and right sides of the axes.
# +
# Solution
# plt.gca().tick_params(top=True, right=True)
# -
# ## rcParams
#
# If you want to make a customization that applies to all figures in a notebook, you can use `rcParams`.
#
# Here's an example that reads the current font size from `rcParams`:
plt.rcParams['font.size']
# And sets it to a new value:
plt.rcParams['font.size'] = 14
# **Exercise:** Plot the previous figure again, and see what font sizes have changed. Look up any other element of `rcParams`, change its value, and check the effect on the figure.
# If you find yourself making the same customizations in several notebooks, you can put changes to `rcParams` in a `matplotlibrc` file, [which you can read about here](https://matplotlib.org/3.3.1/tutorials/introductory/customizing.html#customizing-with-matplotlibrc-files).
# ## Style sheets
#
# The `matplotlibrc` file is read when you import Matplotlib, so it is not easy to switch from one set of options to another.
#
# The solution to this problem is style sheets, [which you can read about here](https://matplotlib.org/3.1.1/tutorials/introductory/customizing.html).
#
# Matplotlib provides a set of predefined style sheets, or you can make your own.
#
# The following cell displays a list of style sheets installed on your system.
plt.style.available
# Note that `seaborn-paper`, `seaborn-talk` and `seaborn-poster` are particularly intended to prepare versions of a figure with text sizes and other features that work well in papers, talks, and posters.
#
# To use any of these style sheets, run `plt.style.use` like this:
#
# ```
# plt.style.use('fivethirtyeight')
# ```
# The style sheet you choose will affect the appearance of all figures you plot after calling `use`, unless you override any of the options or call `use` again.
#
# **Exercise:** Choose one of the styles on the list and select it by calling `use`. Then go back and plot one of the figures above and see what effect it has.
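# If you want a style to affect only a single figure, Matplotlib also provides
# `plt.style.context`, a context manager that restores the previous settings when
# the block ends. A small sketch (the `Agg` backend line is only needed outside a notebook):

```python
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; not needed in a notebook
import matplotlib.pyplot as plt

default_face = plt.rcParams['axes.facecolor']

# The style applies only inside the "with" block
with plt.style.context('ggplot'):
    plt.figure()
    plt.plot([1, 2, 3], [1, 4, 9])

# Outside the block, the previous defaults are restored
assert plt.rcParams['axes.facecolor'] == default_face
```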
# If you can't find a style sheet that's exactly what you want, you can make your own. This repository includes a style sheet called `az-paper-twocol.mplstyle`, with customizations chosen by <NAME> for publication in astronomy journals.
#
# The following cell downloads the style sheet.
# +
import os
filename = 'az-paper-twocol.mplstyle'
path = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
print(download(path+filename))
# -
# You can use it like this:
#
# ```
# plt.style.use('./az-paper-twocol.mplstyle')
# ```
#
# The prefix `./` tells Matplotlib to look for the file in the current directory.
# As an alternative, you can install a style sheet for your own use by putting it in your configuration directory. To find out where that is, you can run the following command:
#
# ```
# import matplotlib as mpl
#
# mpl.get_configdir()
# ```
# ## LaTeX fonts
#
# When you include mathematical expressions in titles, labels, and annotations, Matplotlib uses [`mathtext`](https://matplotlib.org/3.1.0/tutorials/text/mathtext.html) to typeset them. `mathtext` uses the same syntax as LaTeX, but it provides only a subset of its features.
#
# If you need features that are not provided by `mathtext`, or you prefer the way LaTeX typesets mathematical expressions, you can customize Matplotlib to use LaTeX.
#
# In `matplotlibrc` or in a style sheet, you can add the following line:
#
# ```
# text.usetex : true
# ```
#
# Or in a notebook you can run the following code.
#
# ```
# plt.rcParams['text.usetex'] = True
# ```
plt.rcParams['text.usetex'] = True
# If you go back and draw the figure again, you should see the difference.
#
# If you get an error message like
#
# ```
# LaTeX Error: File `type1cm.sty' not found.
# ```
#
# You might have to install a package that contains the fonts LaTeX needs. On some systems, the packages `texlive-latex-extra` or `cm-super` might be what you need. [See here for more help with this](https://stackoverflow.com/questions/11354149/python-unable-to-render-tex-in-matplotlib).
#
# In case you are curious, `cm` stands for [Computer Modern](https://en.wikipedia.org/wiki/Computer_Modern), the font LaTeX uses to typeset math.
# ## Multiple panels
#
# So far we've been working with one figure at a time, but the figure we are replicating contains multiple panels, also known as "subplots".
#
# Confusingly, Matplotlib provides *three* functions for making figures like this: `subplot`, `subplots`, and `subplot2grid`.
#
# * [`subplot`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplot.html) is simple and similar to MATLAB, so if you are familiar with that interface, you might like `subplot`.
#
# * [`subplots`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplots.html) is more object-oriented, which some people prefer.
#
# * [`subplot2grid`](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplot2grid.html) is most convenient if you want to control the relative sizes of the subplots.
#
# So we'll use `subplot2grid`.
#
# All of these functions are easier to use if we put the code that generates each panel in a function.
# ## Upper right
#
# To make the panel in the upper right, we have to reload `centerline`.
# +
import os
filename = 'gd1_dataframe.hdf5'
path = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
print(download(path+filename))
# +
import pandas as pd
centerline = pd.read_hdf(filename, 'centerline')
# -
# And define the coordinates of the rectangle we selected.
# +
pm1_min = -8.9
pm1_max = -6.9
pm2_min = -2.2
pm2_max = 1.0
pm1_rect = [pm1_min, pm1_min, pm1_max, pm1_max]
pm2_rect = [pm2_min, pm2_max, pm2_max, pm2_min]
# -
# To plot this rectangle, we'll use a feature we have not seen before: `Polygon`, which is provided by Matplotlib.
#
# To create a `Polygon`, we have to put the coordinates in an array with `x` values in the first column and `y` values in the second column.
# +
import numpy as np
vertices = np.transpose([pm1_rect, pm2_rect])
vertices
# -
# The following function takes a `DataFrame` as a parameter, plots the proper motion for each star, and adds a shaded `Polygon` to show the region we selected.
# +
from matplotlib.patches import Polygon
def plot_proper_motion(df):
pm1 = df['pm_phi1']
pm2 = df['pm_phi2']
plt.plot(pm1, pm2, 'ko', markersize=0.3, alpha=0.3)
poly = Polygon(vertices, closed=True,
facecolor='C1', alpha=0.4)
plt.gca().add_patch(poly)
    plt.xlabel(r'$\mu_{\phi_1} [\mathrm{mas~yr}^{-1}]$')
    plt.ylabel(r'$\mu_{\phi_2} [\mathrm{mas~yr}^{-1}]$')
plt.xlim(-12, 8)
plt.ylim(-10, 10)
# -
# Notice that `add_patch` is like `invert_yaxis`; in order to call it, we have to use `gca` to get the current axes.
#
# Here's what the new version of the figure looks like. We've changed the labels on the axes to be consistent with the paper.
# +
plt.rcParams['text.usetex'] = False
plt.style.use('default')
plot_proper_motion(centerline)
# -
# ## Upper left
#
# Now let's work on the panel in the upper left. We have to reload `candidates`.
# +
import os
filename = 'gd1_candidates.hdf5'
path = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
print(download(path+filename))
# +
import pandas as pd
filename = 'gd1_candidates.hdf5'
candidate_df = pd.read_hdf(filename, 'candidate_df')
# -
# Here's a function that takes a `DataFrame` of candidate stars and plots their positions in GD-1 coordinates.
def plot_first_selection(df):
x = df['phi1']
y = df['phi2']
plt.plot(x, y, 'ko', markersize=0.3, alpha=0.3)
    plt.xlabel(r'$\phi_1$ [deg]')
    plt.ylabel(r'$\phi_2$ [deg]')
plt.title('Proper motion selection', fontsize='medium')
plt.axis('equal')
# And here's what it looks like.
plot_first_selection(candidate_df)
# ## Lower right
#
# For the figure in the lower right, we need to reload the merged `DataFrame`, which contains data from Gaia and photometry data from Pan-STARRS.
# +
import pandas as pd
filename = 'gd1_merged.hdf5'
merged = pd.read_hdf(filename, 'merged')
# -
# From the previous notebook, here's the function that plots the color-magnitude diagram.
# +
import matplotlib.pyplot as plt
def plot_cmd(table):
"""Plot a color magnitude diagram.
table: Table or DataFrame with photometry data
"""
y = table['g_mean_psf_mag']
x = table['g_mean_psf_mag'] - table['i_mean_psf_mag']
plt.plot(x, y, 'ko', markersize=0.3, alpha=0.3)
plt.xlim([0, 1.5])
plt.ylim([14, 22])
plt.gca().invert_yaxis()
plt.ylabel('$g_0$')
plt.xlabel('$(g-i)_0$')
# -
# And here's what it looks like.
plot_cmd(merged)
# **Exercise:** Add a few lines to `plot_cmd` to show the Polygon we selected as a shaded area.
#
# Run these cells to get the polygon coordinates we saved in the previous notebook.
# +
import os
filename = 'gd1_polygon.hdf5'
path = 'https://github.com/AllenDowney/AstronomicalData/raw/main/data/'
if not os.path.exists(filename):
print(download(path+filename))
# -
coords_df = pd.read_hdf(filename, 'coords_df')
coords = coords_df.to_numpy()
coords
# +
# Solution
#poly = Polygon(coords, closed=True,
# facecolor='C1', alpha=0.4)
#plt.gca().add_patch(poly)
# -
# ## Subplots
#
# Now we're ready to put it all together. To make a figure with four subplots, we'll use `subplot2grid`, [which requires two arguments](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.subplot2grid.html):
#
# * `shape`, which is a tuple with the number of rows and columns in the grid, and
#
# * `loc`, which is a tuple identifying the location in the grid we're about to fill.
#
# In this example, `shape` is `(2, 2)` to create two rows and two columns.
#
# For the first panel, `loc` is `(0, 0)`, which indicates row 0 and column 0, which is the upper-left panel.
#
# Here's how we use it to draw the four panels.
# +
shape = (2, 2)
plt.subplot2grid(shape, (0, 0))
plot_first_selection(candidate_df)
plt.subplot2grid(shape, (0, 1))
plot_proper_motion(centerline)
plt.subplot2grid(shape, (1, 0))
plot_second_selection(selected)
plt.subplot2grid(shape, (1, 1))
plot_cmd(merged)
poly = Polygon(coords, closed=True,
facecolor='C1', alpha=0.4)
plt.gca().add_patch(poly)
plt.tight_layout()
# -
# We use [`plt.tight_layout`](https://matplotlib.org/3.3.1/tutorials/intermediate/tight_layout_guide.html) at the end, which adjusts the sizes of the panels to make sure the titles and axis labels don't overlap.
#
# **Exercise:** See what happens if you leave out `tight_layout`.
# ## Adjusting proportions
#
# In the previous figure, the panels are all the same size. To get a better view of GD-1, we'd like to stretch the panels on the left and compress the ones on the right.
#
# To do that, we'll use the `colspan` argument to make a panel that spans multiple columns in the grid.
#
# In the following example, `shape` is `(2, 4)`, which means 2 rows and 4 columns.
#
# The panels on the left span three columns, so they are three times wider than the panels on the right.
#
# At the same time, we use `figsize` to adjust the aspect ratio of the whole figure.
# +
plt.figure(figsize=(9, 4.5))
shape = (2, 4)
plt.subplot2grid(shape, (0, 0), colspan=3)
plot_first_selection(candidate_df)
plt.subplot2grid(shape, (0, 3))
plot_proper_motion(centerline)
plt.subplot2grid(shape, (1, 0), colspan=3)
plot_second_selection(selected)
plt.subplot2grid(shape, (1, 3))
plot_cmd(merged)
poly = Polygon(coords, closed=True,
facecolor='C1', alpha=0.4)
plt.gca().add_patch(poly)
plt.tight_layout()
# -
# This is looking more and more like the figure in the paper.
#
# **Exercise:** In this example, the ratio of the widths of the panels is 3:1. How would you adjust it if you wanted the ratio to be 3:2?
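# One way to approach this exercise, sketched here with empty panels standing in
# for the plotting functions: use a grid with 5 columns, and give the left panels
# `colspan=3` and the right panels `colspan=2` (the `Agg` backend line is only
# needed outside a notebook).

```python
import matplotlib
matplotlib.use('Agg')  # off-screen rendering; not needed in a notebook
import matplotlib.pyplot as plt

plt.figure(figsize=(9, 4.5))
shape = (2, 5)  # 2 rows, 5 columns: colspans of 3 and 2 give a 3:2 width ratio

plt.subplot2grid(shape, (0, 0), colspan=3)
plt.subplot2grid(shape, (0, 3), colspan=2)
plt.subplot2grid(shape, (1, 0), colspan=3)
plt.subplot2grid(shape, (1, 3), colspan=2)
plt.tight_layout()
```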
# ## Summary
#
# In this notebook, we reverse-engineered the figure we've been replicating, identifying elements that seem effective and others that could be improved.
#
# We explored features Matplotlib provides for adding annotations to figures -- including text, lines, arrows, and polygons -- and several ways to customize the appearance of figures. And we learned how to create figures that contain multiple panels.
# ## Best practices
#
# * The most effective figures focus on telling a single story clearly and compellingly.
#
# * Consider using annotations to guide the reader's attention to the most important elements of a figure.
#
# * The default Matplotlib style generates good quality figures, but there are several ways you can override the defaults.
#
# * If you find yourself making the same customizations on several projects, you might want to create your own style sheet.
| _build/html/_sources/07_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Find and visualize data for prediction.
# # Importing data
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# -
data = pd.read_csv("../data/energy_consumption.csv",
sep=",")
data.sample(5)
print("Number of samples: {}".format(len(data)))
# # Visualizing data
# We will visualize the data for 10 days, so let's select the first 240 hours.
ten_days_data = data[:240]
x = ten_days_data['Datetime'].to_numpy()
y = ten_days_data['PJME_MW'].to_numpy()
plt.xlabel("Time")
plt.ylabel("Energy consumption")
plt.xticks([i for i in range(len(x)-1) if i%((len(x)-1)//2)==0])
plt.plot(x, y, 'r')
| DataVisualization/notebooks/RegressionEnergyConsumption.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import numpy as np
import json
from datetime import date, datetime
with_json = False
def json_serial(obj):
    # Serialize date/datetime objects as Unix timestamps for json.dump;
    # note that "%s" is a platform-dependent strftime extension.
    return int(obj.strftime("%s"))
data = pd.read_csv('flights_2006_2010.csv', sep='\t', encoding='utf-8',
                   dtype={'FlightDate': 'str', 'ArrTime': 'str', 'DepTime': 'str'})
# +
renamed = data.rename(index=str, columns={
    "FlightDate": "FL_DATE", "DepTime": "DEP_TIME", "ArrTime": "ARR_TIME",
    "Distance": "DISTANCE", "AirTime": "AIR_TIME", "DepDelay": "DEP_DELAY",
    "ArrDelay": "ARR_DELAY"})
renamed['FL_DATE'] = pd.to_datetime(renamed.FL_DATE, format='%Y-%m-%d').dt.date
renamed['DEP_TIME'] = renamed.DEP_TIME.replace('2400', '0000')
renamed['ARR_TIME'] = renamed.ARR_TIME.replace('2400', '0000')
def toTime(col):
    # Convert "HHMM" strings to fractional hours, e.g. "1430" -> 14.5
    col = pd.to_numeric(col)
    col = (col/100).apply(np.floor) + (col.mod(100)) / 60.
    return col
renamed['DEP_TIME'] = toTime(renamed['DEP_TIME'])
renamed['ARR_TIME'] = toTime(renamed['ARR_TIME'])
types = {
'DEP_DELAY': 'int16',
'ARR_DELAY': 'int16',
'AIR_TIME': 'int16',
'DISTANCE': 'int16',
'DEP_TIME': 'float16',
'ARR_TIME': 'float16'
}
columns = ['FL_DATE'] + list(types.keys())
renamed = renamed[columns]
renamed = renamed.dropna()
right_types = renamed.astype(types)
# -
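# As a quick sanity check of the `toTime` conversion above ("HHMM" strings to
# fractional hours), "1430" (2:30 pm) should map to 14.5. The definition is
# repeated here so the snippet is self-contained:

```python
import numpy as np
import pandas as pd

def toTime(col):
    # "HHMM" -> fractional hours, e.g. "1430" -> 14 + 30/60 = 14.5
    col = pd.to_numeric(col)
    col = (col/100).apply(np.floor) + (col.mod(100)) / 60.
    return col

print(toTime(pd.Series(['1430', '0000', '0130'])).tolist())  # -> [14.5, 0.0, 1.5]
```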
renamed.head()
sizes = [(10000, 'flights-10k'), (200000, 'flights-200k'),
         (500000, 'flights-500k'), (1000000, 'flights-1m'),
         (3000000, 'flights-3m'), (10000000, 'flights-10m')]
for size, name in sizes:
smaller = right_types[:size]
print(name, len(smaller))
table = pa.Table.from_pandas(smaller, preserve_index=False)
if with_json:
d = {}
for column in smaller.columns:
d[column]=list(smaller[column])
with open(f'{name}.json', 'w') as f:
json.dump(d, f, default=json_serial, separators=(',', ':'))
# table = table.column('ARRIVAL').cast(pa.TimestampValue, True)
# optionally, write parquet files
# pq.write_table(table, f'{name}.parquet')
writer = pa.RecordBatchFileWriter(f'{name}.arrow', table.schema)
writer.write(table)
writer.close()
# !ls -lah
| data/convert_flights.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic Walkthrough
import pandas as pd
import lux
# +
# Collecting basic usage statistics for Lux (For more information, see: https://tinyurl.com/logging-consent)
#lux.logger = True # Remove this line if you do not want your interactions recorded
# -
# We first load in the [Cars dataset](http://lib.stat.cmu.edu/datasets/) with 392 different cars from 1970-1982, which contains information about its Horsepower, MilesPerGal, Acceleration, etc.
df = pd.read_csv("car.csv")
df["Year"] = pd.to_datetime(df["Year"], format='%Y') # change pandas dtype for the column "Year" to datetype
# When we print out the dataframe, we see the default Pandas display, and we can toggle to a set of recommendations generated by Lux.
# Lux returns four sets of visualizations to show an overview of the dataset.
#
df
# Here we spot this scatterplot visualization `Acceleration` v.s. `Horsepower`.
# Intuitively, we expect cars with higher horsepower to have higher acceleration, but we are actually seeing the opposite of that trend.
#
# Let's learn more about whether there are additional factors affecting this relationship.
# Using the `intent` property, we indicate to Lux that we are interested in the attributes `Acceleration` and `Horsepower`.
df.intent = ["Acceleration","Horsepower"]
df
# On the left, we see that the Current Visualization corresponds to our specified intent.
# On the right, we see different tabs of recommendations:
# - `Enhance` shows what happens when we add an additional variable to the current selection.
# - `Filter` adds an additional filter, while fixing the selected variables on the X and Y axes.
# - `Generalize` removes one of the attributes to display the more general trend.
#
# We can really quickly compare the relationship between `Horsepower` and `Acceleration` when an additional factor is in play. For example, we see here that `Displacement` and `Weight` are higher on the top left and lower towards the bottom right, whereas `MilesPerGal` has the opposite effect.
#
# We see that there is a strong separation between cars that have 4 cylinders (orange) and cars with 8 cylinders (green), so we are interested in learning more about the attribute `Cylinders`.
# We can take a look at this by inspecting the `Series` corresponding to `Cylinders`. Note that Lux not only helps with visualizing dataframes, but also displays visualizations of `Series` objects.
df["Cylinders"]
# The Count distribution shows that there are not many cars with 3 or 5 cylinders, so let's clean the data up to remove those.
df[df["Cylinders"]==3].to_pandas()
df[df["Cylinders"]==5].to_pandas()
# We can easily clean up the data and update recommendations in Lux, due to the tight integration with Pandas.
# Note that the intent here is kept as what we set earlier (i.e., `Horsepower`, `Acceleration`).
df = df[(df["Cylinders"]!=3) & (df["Cylinders"]!=5)]
df
# Let's say we find the time series showing the number of cars with different `Cylinders` over time to be very interesting. In particular, there seems to be a spike in the production of 4-cylinder cars in 1982. To dig into this more, we can export it by selecting the visualization and clicking on the export button.
vis = df.exported[0]
vis
# We can then print out the visualization code in [Altair](https://altair-viz.github.io/) that generated this chart so that we can further tweak the chart as desired.
print(vis.to_altair())
# +
import altair as alt
import pandas._libs.tslibs.timestamps
from pandas._libs.tslibs.timestamps import Timestamp
visData = pd.DataFrame({'Year': {0: Timestamp('1970-01-01 00:00:00'), 1: Timestamp('1970-01-01 00:00:00'), 2: Timestamp('1970-01-01 00:00:00'), 3: Timestamp('1971-01-01 00:00:00'), 4: Timestamp('1971-01-01 00:00:00'), 5: Timestamp('1971-01-01 00:00:00'), 6: Timestamp('1972-01-01 00:00:00'), 7: Timestamp('1972-01-01 00:00:00'), 8: Timestamp('1972-01-01 00:00:00'), 9: Timestamp('1973-01-01 00:00:00'), 10: Timestamp('1973-01-01 00:00:00'), 11: Timestamp('1973-01-01 00:00:00'), 12: Timestamp('1974-01-01 00:00:00'), 13: Timestamp('1974-01-01 00:00:00'), 14: Timestamp('1974-01-01 00:00:00'), 15: Timestamp('1975-01-01 00:00:00'), 16: Timestamp('1975-01-01 00:00:00'), 17: Timestamp('1975-01-01 00:00:00'), 18: Timestamp('1976-01-01 00:00:00'), 19: Timestamp('1976-01-01 00:00:00'), 20: Timestamp('1976-01-01 00:00:00'), 21: Timestamp('1977-01-01 00:00:00'), 22: Timestamp('1977-01-01 00:00:00'), 23: Timestamp('1977-01-01 00:00:00'), 24: Timestamp('1978-01-01 00:00:00'), 25: Timestamp('1978-01-01 00:00:00'), 26: Timestamp('1978-01-01 00:00:00'), 27: Timestamp('1979-01-01 00:00:00'), 28: Timestamp('1979-01-01 00:00:00'), 29: Timestamp('1979-01-01 00:00:00'), 30: Timestamp('1980-01-01 00:00:00'), 31: Timestamp('1980-01-01 00:00:00'), 32: Timestamp('1980-01-01 00:00:00'), 33: Timestamp('1982-01-01 00:00:00'), 34: Timestamp('1982-01-01 00:00:00'), 35: Timestamp('1982-01-01 00:00:00')}, 'Cylinders': {0: 8, 1: 6, 2: 4, 3: 8, 4: 6, 5: 4, 6: 8, 7: 6, 8: 4, 9: 8, 10: 6, 11: 4, 12: 8, 13: 6, 14: 4, 15: 6, 16: 4, 17: 8, 18: 4, 19: 6, 20: 8, 21: 4, 22: 6, 23: 8, 24: 8, 25: 4, 26: 6, 27: 4, 28: 6, 29: 8, 30: 6, 31: 8, 32: 4, 33: 4, 34: 8, 35: 6}, 'Record': {0: 18.0, 1: 4.0, 2: 7.0, 3: 7.0, 4: 8.0, 5: 12.0, 6: 13.0, 7: 0.0, 8: 14.0, 9: 20.0, 10: 8.0, 11: 11.0, 12: 5.0, 13: 6.0, 14: 15.0, 15: 12.0, 16: 12.0, 17: 6.0, 18: 15.0, 19: 10.0, 20: 9.0, 21: 14.0, 22: 5.0, 23: 8.0, 24: 6.0, 25: 17.0, 26: 12.0, 27: 12.0, 28: 6.0, 29: 10.0, 30: 2.0, 31: 0.0, 32: 23.0, 33: 47.0, 34: 1.0, 35: 10.0}})
chart = alt.Chart(visData).mark_line().encode(
y = alt.Y('Record', type= 'quantitative', title='Number of Records'),
x = alt.X('Year', type = 'temporal'),
)
chart = chart.interactive() # Enable Zooming and Panning
chart = chart.encode(color=alt.Color('Cylinders',type='nominal'))
chart = chart.configure_title(fontWeight=500,fontSize=13,font='Helvetica Neue')
chart = chart.configure_axis(titleFontWeight=500,titleFontSize=11,titleFont='Helvetica Neue',
labelFontWeight=400,labelFontSize=8,labelFont='Helvetica Neue',labelColor='#505050')
chart = chart.configure_legend(titleFontWeight=500,titleFontSize=10,titleFont='Helvetica Neue',
labelFontWeight=400,labelFontSize=8,labelFont='Helvetica Neue')
chart = chart.properties(width=160,height=150)
chart
# -
# That is obviously a lot of code. Let's look at how much easier it would be to specify this visualization intent directly in Lux, if we knew what we were looking for.
from lux.vis.Vis import Vis
Vis(["Year","Cylinders"],df)
# # Creating Visualizations
# In Lux, users can specify the particular visualizations they want and visualize their data on demand.
from lux.vis.Vis import Vis
Vis(["Horsepower"],df)
Vis(["Origin","Horsepower"],df)
Vis(["Origin",lux.Clause("Horsepower",aggregation="sum")],df)
# You can also work with collections of Visualization via a `VisList` object.
from lux.vis.VisList import VisList
# For example, we can create a set of visualizations of Weight with respect to all other attributes, using the wildcard “?” symbol.
VisList(["Horsepower","?"],df)
# For more support and resources on Lux:
# - Sign up for the early-user [mailing list](https://forms.gle/XKv3ejrshkCi3FJE6) to stay tuned for upcoming releases, updates, or user studies.
# - Visit [ReadTheDoc](https://lux-api.readthedocs.io/en/latest/) for more detailed documentation.
# - Clone [lux-binder](https://github.com/lux-org/lux-binder) to try out these [hands-on exercises](https://github.com/lux-org/lux-binder/tree/master/exercise) or [tutorial series](https://github.com/lux-org/lux-binder/tree/master/tutorial) on how to use Lux.
# - Join our community [Slack](https://lux-project.slack.com/join/shared_invite/zt-iwg84wfb-fBPaGTBBZfkb9arziy3W~g) to discuss and ask questions.
# - Report any bugs, issues, or requests through [Github Issues](https://github.com/lux-org/lux/issues).
#
| Visualization/Lux_cars_demo.ipynb |