```
%matplotlib inline
```
# Brainstorm auditory tutorial dataset
Here we compute the evoked responses from the raw data for the auditory
Brainstorm tutorial dataset. For comparison, see [1]_ and the associated
`brainstorm site <http://neuroimage.usc.edu/brainstorm/Tutorials/Auditory>`_.
Experiment:
- One subject, 2 acquisition runs of 6 minutes each.
- Each run contains 200 regular beeps and 40 easy deviant beeps.
- Random ISI: between 0.7 s and 1.7 s, uniformly distributed.
- The subject pressed a button with the right index finger when detecting a deviant.
The specifications of this dataset were discussed initially on the
`FieldTrip bug tracker <http://bugzilla.fcdonders.nl/show_bug.cgi?id=2300>`_.
References
----------
.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.
Brainstorm: A User-Friendly Application for MEG/EEG Analysis.
Computational Intelligence and Neuroscience, vol. 2011, Article ID
879716, 13 pages, 2011. doi:10.1155/2011/879716
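As a quick, self-contained illustration of the timing structure described in the experiment summary above (the numbers are hypothetical, not taken from the actual stimulus logs), one run's onset times could be simulated like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli = 240  # 200 standard + 40 deviant beeps per run
isis = rng.uniform(0.7, 1.7, size=n_stimuli)  # uniformly distributed ISIs in seconds
onsets = np.cumsum(isis)  # stimulus onset times relative to the start of the run
print('run length: %.1f minutes' % (onsets[-1] / 60.))
```

With a mean ISI of 1.2 s, 240 stimuli span roughly 5 minutes, which is consistent with the 6-minute runs.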
```
# Authors: Mainak Jas <mainak.jas@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)
import os.path as op
import pandas as pd
import numpy as np
import mne
from mne import combine_evoked
from mne.minimum_norm import apply_inverse
from mne.datasets.brainstorm import bst_auditory
from mne.io import read_raw_ctf
print(__doc__)
```
To reduce memory consumption and running time, some of the steps are
precomputed. To run everything from scratch, change ``use_precomputed`` to
``False``. With ``use_precomputed = False`` the running time of this script
can be several minutes even on a fast computer.
```
use_precomputed = True
```
The data were collected with a CTF 275 system at 2400 Hz and low-pass
filtered at 600 Hz. Here the experimental data and the empty-room noise
recording are read to construct instances of :class:`mne.io.Raw`.
```
data_path = bst_auditory.data_path()
subject = 'bst_auditory'
subjects_dir = op.join(data_path, 'subjects')
raw_fname1 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_01.ds')
raw_fname2 = op.join(data_path, 'MEG', 'bst_auditory',
'S01_AEF_20131218_02.ds')
erm_fname = op.join(data_path, 'MEG', 'bst_auditory',
'S01_Noise_20131218_01.ds')
```
In memory-saving mode we use ``preload=False``, which relies on memory-efficient
IO that loads the data on demand. However, filtering and some other functions
require the data to be preloaded into memory.
```
preload = not use_precomputed
raw = read_raw_ctf(raw_fname1, preload=preload)
n_times_run1 = raw.n_times
mne.io.concatenate_raws([raw, read_raw_ctf(raw_fname2, preload=preload)])
raw_erm = read_raw_ctf(erm_fname, preload=preload)
```
The data channel array consists of 274 MEG axial gradiometers, 26 MEG
reference sensors and 2 EEG electrodes (Cz and Pz).
In addition:
- 1 stim channel for marking presentation times for the stimuli
- 1 audio channel for the sent signal
- 1 response channel for recording the button presses
- 1 bipolar ECG channel
- 2 bipolar EOG channels (vertical and horizontal)
- 12 head tracking channels
- 20 unused channels
The head tracking channels and the unused channels are marked as misc
channels. Here we define the EOG and ECG channels.
```
raw.set_channel_types({'HEOG': 'eog', 'VEOG': 'eog', 'ECG': 'ecg'})
if not use_precomputed:
    # Leave out the two EEG channels for easier computation of forward.
    raw.pick_types(meg=True, eeg=False, stim=True, misc=True, eog=True,
                   ecg=True)
```
For noise reduction, a set of bad segments has been identified and stored
in csv files. The bad segments are later used to reject epochs that overlap
with them.
The file for the second run also contains some saccades, which are removed
using SSP. We use pandas to read the data from the csv files. You can also
view the files with your favorite text editor.
```
annotations_df = pd.DataFrame()
offset = n_times_run1
for idx in [1, 2]:
    csv_fname = op.join(data_path, 'MEG', 'bst_auditory',
                        'events_bad_0%s.csv' % idx)
    df = pd.read_csv(csv_fname, header=None,
                     names=['onset', 'duration', 'id', 'label'])
    print('Events from run {0}:'.format(idx))
    print(df)
    df['onset'] += offset * (idx - 1)
    annotations_df = pd.concat([annotations_df, df], axis=0)

saccades_events = df[df['label'] == 'saccade'].values[:, :3].astype(int)

# Conversion from samples to times:
onsets = annotations_df['onset'].values / raw.info['sfreq']
durations = annotations_df['duration'].values / raw.info['sfreq']
descriptions = annotations_df['label'].values

annotations = mne.Annotations(onsets, durations, descriptions)
raw.annotations = annotations
del onsets, durations, descriptions
```
Here we compute the saccade and EOG projectors for magnetometers and add
them to the raw data. The projectors are added to both runs.
```
saccade_epochs = mne.Epochs(raw, saccades_events, 1, 0., 0.5, preload=True,
                            reject_by_annotation=False)

projs_saccade = mne.compute_proj_epochs(saccade_epochs, n_mag=1, n_eeg=0,
                                        desc_prefix='saccade')

if use_precomputed:
    proj_fname = op.join(data_path, 'MEG', 'bst_auditory',
                         'bst_auditory-eog-proj.fif')
    projs_eog = mne.read_proj(proj_fname)[0]
else:
    projs_eog, _ = mne.preprocessing.compute_proj_eog(raw.load_data(),
                                                      n_mag=1, n_eeg=0)

raw.add_proj(projs_saccade)
raw.add_proj(projs_eog)

del saccade_epochs, saccades_events, projs_eog, projs_saccade  # To save memory
```
Visually inspect the effects of the projections. Click the 'proj' button in
the bottom-right corner to toggle the projectors on/off. EOG events can be
plotted by passing the event list as a keyword argument. As the bad segments
and saccades were added as annotations to the raw data, they are plotted as
well.
```
raw.plot(block=True)
```
A typical preprocessing step is the removal of the power-line artifact (50 Hz
or 60 Hz). Here we notch filter the data at 60, 120 and 180 Hz to remove the
60 Hz artifact and its harmonics. The power spectra are plotted before and
after the filtering to show the effect. The drop after 600 Hz appears because
the data were low-pass filtered during acquisition. In memory-saving mode we
do the filtering at the evoked stage, which is not something you would
usually do.
```
if not use_precomputed:
    meg_picks = mne.pick_types(raw.info, meg=True, eeg=False)
    raw.plot_psd(tmax=np.inf, picks=meg_picks)
    notches = np.arange(60, 181, 60)
    raw.notch_filter(notches, phase='zero-double', fir_design='firwin2')
    raw.plot_psd(tmax=np.inf, picks=meg_picks)
```
We also low-pass filter the data at 100 Hz to remove high-frequency components.
```
if not use_precomputed:
    raw.filter(None, 100., h_trans_bandwidth=0.5, filter_length='10s',
               phase='zero-double', fir_design='firwin2')
```
Epoching and averaging.
First some parameters are defined and events are extracted from the stimulus
channel (UPPT001). The rejection thresholds are defined as peak-to-peak
values; they are in T/m for gradiometers, T for magnetometers and
V for EOG and EEG channels.
```
tmin, tmax = -0.1, 0.5
event_id = dict(standard=1, deviant=2)
reject = dict(mag=4e-12, eog=250e-6)
# find events
events = mne.find_events(raw, stim_channel='UPPT001')
```
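To make the peak-to-peak criterion concrete, here is a minimal sketch (with made-up numbers, independent of this dataset) of the check that ``reject=dict(mag=4e-12)`` applies to each epoch's magnetometer data:

```python
import numpy as np

def peak_to_peak_ok(epoch_data, threshold):
    """Return True if every channel's peak-to-peak amplitude is below threshold."""
    ptp = epoch_data.max(axis=-1) - epoch_data.min(axis=-1)  # per-channel range
    return bool((ptp < threshold).all())

# toy epoch: 2 channels x 5 samples, values in tesla
quiet = np.array([[1e-13, 2e-13, 1.5e-13, 1e-13, 2e-13],
                  [0.0, 1e-13, 0.5e-13, 1e-13, 0.0]])
noisy = quiet.copy()
noisy[0, 2] = 5e-12  # inject a large artifact on one channel

print(peak_to_peak_ok(quiet, 4e-12))   # True: kept
print(peak_to_peak_ok(noisy, 4e-12))   # False: epoch would be dropped
```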
The event timing is adjusted by comparing the trigger times with the sound
onsets detected on channel UADC001-4408.
```
sound_data = raw[raw.ch_names.index('UADC001-4408')][0][0]
onsets = np.where(np.abs(sound_data) > 2. * np.std(sound_data))[0]
min_diff = int(0.5 * raw.info['sfreq'])
diffs = np.concatenate([[min_diff + 1], np.diff(onsets)])
onsets = onsets[diffs > min_diff]
assert len(onsets) == len(events)
diffs = 1000. * (events[:, 0] - onsets) / raw.info['sfreq']
print('Trigger delay removed (μ ± σ): %0.1f ± %0.1f ms'
% (np.mean(diffs), np.std(diffs)))
events[:, 0] = onsets
del sound_data, diffs
```
We mark a set of channels that seem noisier than the others as bad. This can
also be done interactively with ``raw.plot`` by clicking the channel name
(or its trace). The marked channels are added as bads when the browser window
is closed.
```
raw.info['bads'] = ['MLO52-4408', 'MRT51-4408', 'MLO42-4408', 'MLO43-4408']
```
The epochs (trials) are created for the MEG channels. First we find the picks
for the MEG and EOG channels, then the epochs are constructed using these
picks. Epochs overlapping with annotated bad segments are also rejected by
default. To turn off rejection by bad segments (as was done earlier for the
saccades), use the keyword ``reject_by_annotation=False``.
```
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=False,
proj=True)
```
We use only the first 40 good epochs from each run. Since we first drop the
bad epochs, the indices of the remaining epochs no longer match those in the
original epochs collection. Investigation of the event timings reveals that
the first epoch from the second run corresponds to index 182.
```
epochs.drop_bad()
epochs_standard = mne.concatenate_epochs([epochs['standard'][range(40)],
epochs['standard'][182:222]])
epochs_standard.load_data() # Resampling to save memory.
epochs_standard.resample(600, npad='auto')
epochs_deviant = epochs['deviant'].load_data()
epochs_deviant.resample(600, npad='auto')
del epochs, picks
```
The average for each condition is computed.
```
evoked_std = epochs_standard.average()
evoked_dev = epochs_deviant.average()
del epochs_standard, epochs_deviant
```
A typical preprocessing step is the removal of the power-line artifact (50 Hz
or 60 Hz). Here we low-pass filter the data at 40 Hz, which will remove all
line artifacts (and high-frequency information). Normally this would be done
on the raw data (with :func:`mne.io.Raw.filter`), but to reduce the memory
consumption of this tutorial, we do it at the evoked stage. (At the raw stage,
you could alternatively notch filter with :func:`mne.io.Raw.notch_filter`.)
```
for evoked in (evoked_std, evoked_dev):
    evoked.filter(l_freq=None, h_freq=40., fir_design='firwin')
```
Here we plot the ERF of the standard and deviant conditions. In both
conditions we can see the P50 and N100 responses. The mismatch negativity is
visible only in the deviant condition, around 100-200 ms. The P200 is also
visible around 170 ms in both conditions, but is much stronger in the
standard condition. The P300 is visible only in the deviant condition
(decision making in preparation of the button press). You can view the
topographies over a certain time span by painting an area while clicking and
holding the left mouse button.
```
evoked_std.plot(window_title='Standard', gfp=True, time_unit='s')
evoked_dev.plot(window_title='Deviant', gfp=True, time_unit='s')
```
Show activations as topography figures.
```
times = np.arange(0.05, 0.301, 0.025)
evoked_std.plot_topomap(times=times, title='Standard', time_unit='s')
evoked_dev.plot_topomap(times=times, title='Deviant', time_unit='s')
```
We can see the MMN effect more clearly by looking at the difference between
the two conditions. The P50 and N100 are no longer visible, but the MMN/P200
and P300 are emphasised.
```
evoked_difference = combine_evoked([evoked_dev, -evoked_std], weights='equal')
evoked_difference.plot(window_title='Difference', gfp=True, time_unit='s')
```
Source estimation.
We compute the noise covariance matrix from the empty room measurement
and use it for the other runs.
```
reject = dict(mag=4e-12)
cov = mne.compute_raw_covariance(raw_erm, reject=reject)
cov.plot(raw_erm.info)
del raw_erm
```
The transformation is read from a file. For more information about
coregistering the data, see `ch_interactive_analysis` or
:func:`mne.gui.coregistration`.
```
trans_fname = op.join(data_path, 'MEG', 'bst_auditory',
'bst_auditory-trans.fif')
trans = mne.read_trans(trans_fname)
```
To save time and memory, the forward solution is read from a file. Set
``use_precomputed=False`` at the beginning of this script to build the
forward solution from scratch. The head surfaces for constructing a BEM
solution are read from a file. Since the data only contain MEG channels, we
only need the inner skull surface for making the forward solution. For more
information, see `CHDBBCEJ`, :func:`mne.setup_source_space`,
`create_bem_model` and :func:`mne.bem.make_watershed_bem`.
```
if use_precomputed:
    fwd_fname = op.join(data_path, 'MEG', 'bst_auditory',
                        'bst_auditory-meg-oct-6-fwd.fif')
    fwd = mne.read_forward_solution(fwd_fname)
else:
    src = mne.setup_source_space(subject, spacing='ico4',
                                 subjects_dir=subjects_dir, overwrite=True)
    model = mne.make_bem_model(subject=subject, ico=4, conductivity=[0.3],
                               subjects_dir=subjects_dir)
    bem = mne.make_bem_solution(model)
    fwd = mne.make_forward_solution(evoked_std.info, trans=trans, src=src,
                                    bem=bem)

inv = mne.minimum_norm.make_inverse_operator(evoked_std.info, fwd, cov)
snr = 3.0
lambda2 = 1.0 / snr ** 2
del fwd
```
The sources are computed using the dSPM method and plotted on an inflated
brain surface. For interactive control over the image, use the keyword
``time_viewer=True``.
Standard condition.
```
stc_standard = mne.minimum_norm.apply_inverse(evoked_std, inv, lambda2, 'dSPM')
brain = stc_standard.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_standard, brain
```
Deviant condition.
```
stc_deviant = mne.minimum_norm.apply_inverse(evoked_dev, inv, lambda2, 'dSPM')
brain = stc_deviant.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.1, time_unit='s')
del stc_deviant, brain
```
Difference.
```
stc_difference = apply_inverse(evoked_difference, inv, lambda2, 'dSPM')
brain = stc_difference.plot(subjects_dir=subjects_dir, subject=subject,
surface='inflated', time_viewer=False, hemi='lh',
initial_time=0.15, time_unit='s')
```
```
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import parse
import base64
import hashlib
import json
import random
import string
import webbrowser
from typing import Any

import requests
from IPython.display import clear_output
from oauthlib.oauth2 import WebApplicationClient


class OAuthHttpServer(HTTPServer):
    def __init__(self, *args, **kwargs):
        HTTPServer.__init__(self, *args, **kwargs)
        self.authorization_code = ""


class OAuthHttpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write("<script type=\"application/javascript\">window.close();</script>".encode("UTF-8"))

        parsed = parse.urlparse(self.path)
        qs = parse.parse_qs(parsed.query)
        self.server.authorization_code = qs["code"][0]


def generate_code() -> tuple[str, str]:
    rand = random.SystemRandom()
    code_verifier = ''.join(rand.choices(string.ascii_letters + string.digits, k=128))
    code_sha_256 = hashlib.sha256(code_verifier.encode('utf-8')).digest()
    b64 = base64.urlsafe_b64encode(code_sha_256)
    code_challenge = b64.decode('utf-8').replace('=', '')
    return (code_verifier, code_challenge)


def login(config: dict[str, Any]) -> str:
    with OAuthHttpServer(('', config["port"]), OAuthHttpHandler) as httpd:
        client = WebApplicationClient(config["client_id"])
        code_verifier, code_challenge = generate_code()

        auth_uri = client.prepare_request_uri(
            config["auth_uri"], redirect_uri=config["redirect_uri"],
            scope=config["scopes"], state="test_doesnotmatter",
            code_challenge=code_challenge, code_challenge_method="S256")
        webbrowser.open_new(auth_uri)

        # Block until the browser redirects back with the authorization code
        httpd.handle_request()
        auth_code = httpd.authorization_code

        data = {
            "code": auth_code,
            "client_id": config["client_id"],
            "grant_type": "authorization_code",
            "scopes": config["scopes"],
            "redirect_uri": config["redirect_uri"],
            "code_verifier": code_verifier
        }
        response = requests.post(config["token_uri"], data=data, verify=False)
        access_token = response.json()["access_token"]

        clear_output()
        print("Logged in successfully")
        return access_token


config = {
    "port": 8888,
    "client_id": "python-nb",
    "redirect_uri": "http://localhost:8888",
    "auth_uri": "https://localhost:44300/connect/authorize",
    "token_uri": "https://localhost:44300/connect/token",
    "scopes": ["openid", "profile", "api"]
}

access_token = login(config)
headers = {"Authorization": "Bearer " + access_token}

response = requests.get("https://localhost:44301/weatherforecast", headers=headers, verify=False)
print(json.dumps(response.json(), indent=4))
```
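The PKCE code challenge built in `generate_code` above is the base64url-encoded SHA-256 digest of the verifier, with the `=` padding removed (RFC 7636, method `S256`). A minimal standalone check of that derivation:

```python
import base64
import hashlib

def challenge_from_verifier(verifier: str) -> str:
    digest = hashlib.sha256(verifier.encode('utf-8')).digest()
    # padding only ever appears at the end, so rstrip('=') matches replace('=', '')
    return base64.urlsafe_b64encode(digest).decode('utf-8').rstrip('=')

verifier = 'a' * 43  # RFC 7636 requires a verifier of 43-128 characters
challenge = challenge_from_verifier(verifier)
print(challenge)
print(len(challenge))  # a 32-byte digest always encodes to 43 base64url characters
```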
<img src="images/dask_horizontal.svg" align="right" width="30%">
# Lazy execution
Here we discuss some of the concepts behind dask, and lazy execution of code. You do not need to go through this material if you are eager to get on with the tutorial, but it may help you understand the concepts underlying dask, how these things fit in with techniques you might already be using, and how to understand things that can go wrong.
## Prelude
As Python programmers, you probably already perform certain *tricks* to enable computation of larger-than-memory datasets, parallel execution or delayed/background execution. Perhaps with this phrasing, it is not clear what we mean, but a few examples should make things clearer. The point of Dask is to make simple things easy and complex things possible!
Aside from the [detailed introduction](http://dask.pydata.org/en/latest/), we can summarize the basics of Dask as follows:
- process data that doesn't fit into memory by breaking it into blocks and specifying task chains
- parallelize execution of tasks across cores and even nodes of a cluster
- move computation to the data rather than the other way around, to minimize communication overhead
All of this allows you to get the most out of your computation resources, but program in a way that is very familiar: for-loops to build basic tasks, Python iterators, and the NumPy (array) and Pandas (dataframe) functions for multi-dimensional or tabular data, respectively.
The remainder of this notebook will take you through the first of these programming paradigms. This is more detail than some users will want, who can skip ahead to the iterator, array and dataframe sections; but there will be some data processing tasks that don't easily fit into those abstractions and need to fall back to the methods here.
We include a few examples at the end of the notebooks showing that the ideas behind how Dask is built are not actually that novel, and experienced programmers will have met parts of the design in other situations before. Those examples are left for the interested.
## Dask is a graph execution engine
Dask allows you to construct a prescription for the calculation you want to carry out. That may sound strange, but a simple example will demonstrate that you can achieve this while programming with perfectly ordinary Python functions and for-loops. We saw this in Chapter 02.
```
from dask import delayed
@delayed
def inc(x):
    return x + 1


@delayed
def add(x, y):
    return x + y
```
Here we have used the delayed annotation to show that we want these functions to operate lazily — to save the set of inputs and execute only on demand. `dask.delayed` is also a function which can do this, without the annotation, leaving the original function unchanged, e.g.,
```python
delayed_inc = delayed(inc)
```
```
# this looks like ordinary code
x = inc(15)
y = inc(30)
total = add(x, y)
# x, y and total are all delayed objects.
# They contain a prescription of how to execute
```
Calling a delayed function creates a delayed object (`x`, `y` and `total` above) - examine these interactively. Making these objects is somewhat equivalent to constructs like `lambda` or function wrappers (see below). Each holds a simple dictionary describing the task graph, a full specification of how to carry out the computation.
We can visualize the chain of calculations that the object `total` corresponds to as follows; the circles are functions, rectangles are data/results.
```
total.visualize()
```
But so far, no functions have actually been executed. This demonstrates the division between the graph-creation part of Dask (`delayed()`, in this example) and the graph-execution part of Dask.
To run the "graph" in the visualization, and actually get a result, do:
```
# execute all tasks
total.compute()
```
**Why should you care about this?**
By building a specification of the calculation we want to carry out before executing anything, we can pass the specification on to an *execution engine* for evaluation. In the case of Dask, this execution engine could be running on many nodes of a cluster, so you have access to the full number of CPU cores and memory across all the machines. Dask will intelligently execute your calculation while taking care to minimize the amount of data held in memory, and while parallelizing over the tasks that make up the graph. Notice that in the animated diagram below, where four workers are processing the (simple) graph, execution progresses vertically up the branches first, so that intermediate results can be expunged before moving on to a new branch.
With `delayed` and normal pythonic looped code, very complex graphs can be built up and passed on to Dask for execution. See a nice example of a [simulated complex ETL](https://blog.dask.org/2017/01/24/dask-custom) workflow.

### Exercise
We will apply `delayed` to a real data processing task, albeit a simple one.
Consider reading three CSV files with `pd.read_csv` and then measuring their total length. We will consider how you would do this with ordinary Python code, then build a graph for this process using delayed, and finally execute this graph using Dask, for a handy speed-up factor of more than two (there are only three inputs to parallelize over).
```
%run prep.py -d accounts
import pandas as pd
import os
filenames = [os.path.join('data', 'accounts.%d.csv' % i) for i in [0, 1, 2]]
filenames
%%time
# normal, sequential code
a = pd.read_csv(filenames[0])
b = pd.read_csv(filenames[1])
c = pd.read_csv(filenames[2])
na = len(a)
nb = len(b)
nc = len(c)
total = sum([na, nb, nc])
print(total)
```
Your task is to recreate this graph again using the delayed function on the original Python code. The three functions you want to delay are `pd.read_csv`, `len` and `sum`.
```python
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
...
total = ...
# execute
%time total.compute()
```
```
# your verbose code here
```
Next, repeat this using loops, rather than writing out all the variables.
```
# your concise code here
## verbose version
delayed_read_csv = delayed(pd.read_csv)
a = delayed_read_csv(filenames[0])
b = delayed_read_csv(filenames[1])
c = delayed_read_csv(filenames[2])
delayed_len = delayed(len)
na = delayed_len(a)
nb = delayed_len(b)
nc = delayed_len(c)
delayed_sum = delayed(sum)
total = delayed_sum([na, nb, nc])
%time print(total.compute())
## concise version
csvs = [delayed(pd.read_csv)(fn) for fn in filenames]
lens = [delayed(len)(csv) for csv in csvs]
total = delayed(sum)(lens)
%time print(total.compute())
```
**Notes**
Delayed objects support various operations:
```python
x2 = x + 1
```
if `x` was a delayed result (like `total`, above), then so is `x2`. Supported operations include arithmetic operators, item or slice selection, attribute access and method calls - essentially anything that could be phrased as a `lambda` expression.
Operations which are *not* supported include mutation, setter methods, iteration (`for`) and truth-testing (`bool`).
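This is not dask's implementation, but a toy sketch of why that split exists: operators like `+` and `[]` have special methods that a lazy wrapper can intercept and record, whereas `for` and `if` need real values immediately:

```python
class Lazy:
    """Toy stand-in for a delayed object: records operations, runs on .compute()."""
    def __init__(self, func, *args):
        self.func, self.args = func, args

    def compute(self):
        # recursively evaluate lazy arguments, then apply the recorded function
        args = [a.compute() if isinstance(a, Lazy) else a for a in self.args]
        return self.func(*args)

    def __add__(self, other):
        return Lazy(lambda a, b: a + b, self, other)

    def __getitem__(self, key):
        return Lazy(lambda a, k: a[k], self, key)

x = Lazy(lambda: [10, 20, 30])
y = x[1] + 5       # builds a small graph; nothing has run yet
print(y.compute())  # 25
```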
## Appendix: Further detail and examples
The following examples show that the kinds of things Dask does are not so far removed from normal Python programming when dealing with big data. These examples are **only meant for experts**, typical users can continue with the next notebook in the tutorial.
### Example 1: simple word count
This directory contains a file called `README.md`. How would you count the number of words in that file?
The simplest approach would be to load all the data into memory, split on whitespace and count the number of results. Here we use a regular expression to split words.
```
import re

splitter = re.compile(r'\w+')

with open('README.md', 'r') as f:
    data = f.read()

result = len(splitter.findall(data))
result
```
The trouble with this approach is that it does not scale - if the file is very large, it, and the generated list of words, might fill up memory. We can easily avoid that, because we only need a simple sum, and each line is totally independent of the others. Now we evaluate each piece of data and immediately free up the space again, so we could perform this on arbitrarily large files. Note that there is often a trade-off between time-efficiency and memory footprint: the following uses very little memory, but may be slower for files that do not fill a large fraction of memory. In general, one would like chunks small enough not to stress memory, but big enough for efficient use of the CPU.
```
result = 0
with open('README.md', 'r') as f:
    for line in f:
        result += len(splitter.findall(line))
result
```
### Example 2: background execution
There are many tasks that take a while to complete, but don't actually require much of the CPU, for example anything that requires communication over a network, or input from a user. In typical sequential programming, execution would need to halt while the process completes, and then continue execution. That would be dreadful for a user experience (imagine the slow progress bar that locks up the application and cannot be canceled), and wasteful of time (the CPU could have been doing useful work in the meantime).
For example, we can launch processes and get their output as follows:
```python
import subprocess
p = subprocess.Popen(command, stdout=subprocess.PIPE)
p.returncode
```
The task is run in a separate process, and the return-code will remain `None` until it completes, when it will change to `0`. To get the result back, we need `out = p.communicate()[0]` (which would block if the process was not complete).
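The `command` in the snippet above is a placeholder. A self-contained version, using the current Python interpreter as the child process:

```python
import subprocess
import sys

# launch a child process; Popen returns immediately, the child runs in the background
p = subprocess.Popen([sys.executable, '-c', "print('hello from child')"],
                     stdout=subprocess.PIPE)
print(p.returncode)  # None: returncode is only set once we poll or wait

out, _ = p.communicate()  # block until the child finishes and collect its stdout
print(out.decode().strip())
print(p.returncode)  # 0 on clean exit
```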
Similarly, we can launch Python processes and threads in the background. Some methods allow mapping over multiple inputs and gathering the results, more on that later. The thread starts and the cell completes immediately, but the data associated with the download only appears in the queue object some time later.
```
# Edit sources.py to configure source locations
import sources
sources.lazy_url
import threading
import queue
import urllib.request

def get_webdata(url, q):
    u = urllib.request.urlopen(url)
    # raise ValueError
    q.put(u.read())

q = queue.Queue()
t = threading.Thread(target=get_webdata, args=(sources.lazy_url, q))
t.start()
# fetch result back into this thread. If the worker thread is not done, this would wait.
q.get()
```
Consider: what would you see if there had been an exception within the `get_webdata` function? You could uncomment the `raise` line, above, and re-execute the two cells. What happens? Is there any way to debug the execution to find the root cause of the error?
### Example 3: delayed execution
There are many ways in Python to specify the computation you want to execute, but only run it *later*.
```
def add(x, y):
    return x + y

# Sometimes we defer computations with strings
x = 15
y = 30
z = "add(x, y)"
eval(z)

# we can use lambda or other "closure"
x = 15
y = 30
z = lambda: add(x, y)
z()

# A very similar thing happens in functools.partial
import functools
z = functools.partial(add, x, y)
z()

# Python generators are delayed execution by default
# Many Python functions expect such iterable objects
def gen():
    res = x
    yield res
    res += y
    yield res

g = gen()

# run once: we get one value and execution halts within the generator
# run again and the execution completes
next(g)
```
### Dask graphs
Any Dask object, such as `total`, above, has an attribute which describes the calculations necessary to produce that result. Indeed, this is exactly the graph that we have been talking about, which can be visualized. We see that it is a simple dictionary, the keys are unique task identifiers, and the values are the functions and inputs for calculation.
`delayed` is a handy mechanism for creating the Dask graph, but the adventurous may wish to play with the full flexibility afforded by building the graph dictionaries directly. Detailed information can be found [here](http://dask.pydata.org/en/latest/graphs.html).
```
total.dask
dict(total.dask)
```
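To see what such a dictionary means without dask in the loop, here is a hand-built graph in the same `{key: (function, *args)}` style, together with a naive recursive evaluator (dask's schedulers do this job far more cleverly, with caching and parallelism):

```python
from operator import add, mul

# a task is a tuple (function, *args); any other value is plain data
dsk = {
    'x': 1,
    'y': 2,
    'z': (add, 'x', 'y'),
    'w': (mul, 'z', 10),
}

def get(graph, key):
    """Naively evaluate one key of a dask-style graph, recursing into dependencies."""
    value = graph[key]
    if isinstance(value, tuple) and callable(value[0]):
        func, *args = value
        # arguments that name other keys are resolved first; literals pass through
        return func(*(get(graph, a) if a in graph else a for a in args))
    return value

print(get(dsk, 'w'))  # 30
```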
# Setup
```
# Dependencies
import pandas as pd
from splinter import Browser
from bs4 import BeautifulSoup
from webdriver_manager.chrome import ChromeDriverManager
import requests
```
# Webscraping
## Nasa Mars News
```
# Setup splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# URL of page to be scraped
url = 'https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest'
browser.visit(url)
# Create BeautifulSoup object; parse with 'html.parser'
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
# Get a list of all news division and pick out the latest one
results = soup.find_all('div', class_='list_text')
result = results[0]
try:
    # Identify and return title and paragraph
    news_title = result.find('div', class_='content_title').a.text.strip()
    news_paragraph = result.find('div', class_='article_teaser_body').text.strip()

    # Print title and paragraph
    print(news_title)
    print(news_paragraph)
except Exception as e:
    print(e)

browser.quit()
```
## Mars Image
```
# Setup splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# URL of page to be scraped
url = 'https://data-class-jpl-space.s3.amazonaws.com/JPL_Space/index.html'
browser.visit(url)
# Create BeautifulSoup object; parse with 'html.parser'
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
header = soup.find('div', class_='header')
browser.links.find_by_partial_text('FULL IMAGE').click()
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
image_box = soup.find('div', class_='fancybox-inner')
featured_image_url = url.replace('index.html', '') + image_box.img['src']
featured_image_url
browser.quit()
```
## Mars Facts
```
# Setup splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# URL of page to be scraped
url = 'https://space-facts.com/mars/'
browser.visit(url)
# Create BeautifulSoup object; parse with 'html.parser'
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
tables = pd.read_html(url)
table = tables[0]
table.columns = ['Key', 'Value']
table
table.to_html()
browser.quit()
```
## Mars Hemispheres
```
# Setup splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
# URL of page to be scraped
url = 'https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars'
browser.visit(url)
# Create BeautifulSoup object; parse with 'html.parser'
html = browser.html
soup = BeautifulSoup(html, 'html.parser')
images = soup.find_all('div', class_='description')
image_list = []
for image in images:
    image_dict = {}
    image_title = image.a.h3.text
    image_dict['title'] = image_title
    browser.links.find_by_partial_text(image_title).click()
    new_html = browser.html
    new_soup = BeautifulSoup(new_html, 'html.parser')
    download = new_soup.find('div', class_='downloads')
    original = download.find_all('li')[1].a['href']
    image_dict['img_url'] = original
    image_list.append(image_dict)
    browser.back()

image_list
browser.quit()
```
# Project 3: Implement SLAM
---
## Project Overview
In this project, you'll implement SLAM for a robot that moves and senses in a 2-dimensional grid world!
SLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem.
Using what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`.
> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world
You can implement helper functions as you see fit, but your function must return `mu`. The vector `mu` should have the (x, y) coordinates interlaced; for example, with 2 poses and 2 landmarks, `mu` would look like the following, where `P` is the robot position and `L` the landmark position:
```
mu = matrix([[Px0],
[Py0],
[Px1],
[Py1],
[Lx0],
[Ly0],
[Lx1],
[Ly1]])
```
You can see that `mu` holds the poses first, `(x0, y0), (x1, y1), ...`, then the landmark locations at the end of the matrix; we consider an `nx1` matrix to be a vector.
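To make the interlaced layout concrete, here is a small sketch (all numeric values invented for illustration) of how poses and landmarks can be pulled out of such a vector:

```python
import numpy as np

# Toy example: N = 2 poses and 2 landmarks, interlaced as
# [Px0, Py0, Px1, Py1, Lx0, Ly0, Lx1, Ly1] (values are made up)
N = 2
num_landmarks = 2
mu = np.array([[50.0], [50.0], [62.0], [47.0], [20.0], [80.0], [70.0], [30.0]])

# pose i lives at rows 2*i and 2*i + 1
poses = [(mu[2*i].item(), mu[2*i + 1].item()) for i in range(N)]

# landmark j lives at rows 2*(N + j) and 2*(N + j) + 1
landmarks = [(mu[2*(N + j)].item(), mu[2*(N + j) + 1].item())
             for j in range(num_landmarks)]

print(poses)      # [(50.0, 50.0), (62.0, 47.0)]
print(landmarks)  # [(20.0, 80.0), (70.0, 30.0)]
```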
## Generating an environment
In a real SLAM problem, you may be given a map that contains information about landmark locations; in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some number of time steps. The `make_data` function relies on a correct implementation of the robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.
---
## Create the world
Use the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds!
`data` holds the sensor measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.
#### Helper functions
You will be working with the `robot` class that may look familiar from the first notebook.
In fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. It should look very similar to the robot move/sense cycle you've seen in the first notebook.
```
import numpy as np
from helpers import make_data
# your implementation of slam should work with the following inputs
# feel free to change these input values and see how it responds!
# world parameters
num_landmarks = 5 # number of landmarks
N = 20 # time steps
world_size = 100.0 # size of world (square)
# robot parameters
measurement_range = 50.0 # range at which we can sense landmarks
motion_noise = 2.0 # noise in robot motion
measurement_noise = 2.0 # noise in the measurements
distance = 20.0 # distance by which robot (intends to) move each iteration
# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks
data = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)
```
### A note on `make_data`
The function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:
1. Instantiating a robot (using the robot class)
2. Creating a grid world with landmarks in it
**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**
The `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track the robot over time and determine the locations of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.
In `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:
```
measurement = data[i][0]
motion = data[i][1]
```
```
# print out some stats about the data
time_step = 0
print('Example measurements: \n', data[time_step][0])
print('\n')
print('Example motion: \n', data[time_step][1])
```
Try changing the value of `time_step`; you should see that the list of measurements varies based on what the robot sees in the world after it moves. As you know from the first notebook, the robot can only sense so far, and with a certain amount of accuracy, when measuring the distance between its location and the locations of landmarks. The motion of the robot is always a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.
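The nested structure described above can be sketched with a toy `data` list (all numbers here are invented, but the shape mirrors what `make_data` returns and what the test cases later in this notebook contain):

```python
# Each entry of `data` is [measurements, motion], where measurements is a
# list of [landmark_id, dx, dy] readings and motion is a [dx, dy] vector
data = [
    [[[0, 12.3, -4.1], [2, -8.7, 19.5]], [10.0, 5.0]],  # step 0: 2 landmarks seen
    [[[2, -15.2, 14.0]],                 [10.0, 5.0]],  # step 1: 1 landmark seen
]

for i in range(len(data)):
    measurement = data[i][0]
    motion = data[i][1]
    print(f'step {i}: {len(measurement)} landmark(s) seen, moved {motion}')
```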
## Initialize Constraints
One of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. In the second notebook, you saw an example of how omega and xi could hold all the values that define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.
<img src='images/motion_constraint.png' width=50% height=50% />
In *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.
<img src='images/constraints2D.png' width=50% height=50% />
You may also choose to create two of each omega and xi (one for x and one for y positions).
### TODO: Write a function that initializes omega and xi
Complete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct initial constraints of the correct size and starting values.
*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*
```
def initialize_constraints(N, num_landmarks, world_size):
''' This function takes in a number of time steps N, number of landmarks, and a world_size,
and returns initialized constraint matrices, omega and xi.'''
## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable
num_poses = N
rows = (num_poses * 2) + (num_landmarks * 2)
cols = (num_poses * 2) + (num_landmarks * 2)
initial_x = world_size/2
initial_y = world_size/2
Px_initial = 0
Py_initial = 1
## TODO: Define the constraint matrix, Omega, with two initial "strength" values
## for the initial x, y location of our robot
# omega = [0]
omega = np.zeros(shape=(rows,cols))
omega[Px_initial][Px_initial] = 1
omega[Py_initial][Py_initial] = 1
## TODO: Define the constraint *vector*, xi
## you can assume that the robot starts out in the middle of the world with 100% confidence
# xi = [0]
xi = np.zeros(shape=(cols,1))
xi[Px_initial] = initial_x
xi[Py_initial] = initial_y
return omega, xi
```
### Test as you go
It's good practice to test out your code as you go. Since `slam` relies on creating and updating the constraint matrix and vector, `omega` and `xi`, to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.
Below, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.
**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final slam function.
This code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.
```
# import data viz resources
import matplotlib.pyplot as plt
from pandas import DataFrame
import seaborn as sns
%matplotlib inline
# define a small N and world_size (small for ease of visualization)
N_test = 5
num_landmarks_test = 2
small_world = 10
# initialize the constraints
initial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)
# define figure size
plt.rcParams["figure.figsize"] = (10,7)
# display omega
sns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)
# define figure size
plt.rcParams["figure.figsize"] = (1,7)
# display xi
sns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)
```
---
## SLAM inputs
In addition to `data`, your slam function will also take in:
* N - The number of time steps that a robot will be moving and sensing
* num_landmarks - The number of landmarks in the world
* world_size - The size (w/h) of your world
* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`
* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`
#### A note on noise
Recall that `omega` holds the relative "strengths" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.
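The add/subtract pattern described above can be sketched on a minimal 1D example with a single motion constraint (the numbers are invented for illustration):

```python
import numpy as np

# One pose x0 at index 0, a second pose x1 at index 1,
# and a single motion dx = 5.0 with motion_noise = 2.0
omega = np.zeros((2, 2))
xi = np.zeros((2, 1))
omega[0][0] = 1.0          # 100% confidence in the starting position
xi[0] = 50.0               # robot starts at x = 50

dx, motion_noise = 5.0, 2.0
weight = 1.0 / motion_noise

# motion constraint x1 - x0 = dx, weighted by 1/noise
omega[0][0] += weight
omega[1][1] += weight
omega[0][1] -= weight
omega[1][0] -= weight
xi[0] -= dx / motion_noise
xi[1] += dx / motion_noise

mu = np.linalg.solve(omega, xi)   # same result as inv(omega) @ xi
print(mu.flatten())               # -> [50. 55.]
```

The solved `mu` recovers the start position (50) and the position after the motion (55), exactly as the constraint encodes.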
### TODO: Implement Graph SLAM
Follow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation!
#### Updating with motion and measurements
With a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\mu = \Omega^{-1}\xi$
**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**
```
## TODO: Complete the code to implement SLAM
## slam takes in 6 arguments and returns mu,
## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations
def slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):
omega, xi = initialize_constraints(N, num_landmarks, world_size)
# Iterate through each time step in the data
for time_step in range(N-1):
# Retrieve all the motion and measurement data for this time_step
measurement = data[time_step][0]
motion = data[time_step][1]
dx = motion[0] # distance to be moved along x in this time_step
dy = motion[1] # distance to be moved along y in this time_step
'''Consider that the robot moves from (x0,y0) to (x1,y1) in this time_step'''
# even-numbered columns of omega correspond to x values
x0 = (time_step * 2) # x0 = 0,2,4,...
        x1 = x0 + 2 # x1 = 2,4,6,...
# odd-numbered columns of omega correspond to y values
y0 = x0 + 1 # y0 = 1,3,5,...
y1 = y0 + 2 # y1 = 3,5,7,...
# Update omega and xi to account for all measurements
# Measurement noise taken into account
for landmark in measurement:
lm = landmark[0] # landmark id
dx_lm = landmark[1] # separation along x from current position
dy_lm = landmark[2] # separation along y from current position
Lx0 = (N * 2) + (lm * 2) # even-numbered columns have x values of landmarks
Ly0 = Lx0 + 1 # odd-numbered columns have y values of landmarks
# update omega values corresponding to measurement between x0 and Lx0
omega[ x0 ][ x0 ] += 1.0/measurement_noise
omega[ Lx0 ][ Lx0 ] += 1.0/measurement_noise
omega[ x0 ][ Lx0 ] += -1.0/measurement_noise
omega[ Lx0 ][ x0 ] += -1.0/measurement_noise
# update omega values corresponding to measurement between y0 and Ly0
omega[ y0 ][ y0 ] += 1.0/measurement_noise
omega[ Ly0 ][ Ly0 ] += 1.0/measurement_noise
omega[ y0 ][ Ly0 ] += -1.0/measurement_noise
omega[ Ly0 ][ y0 ] += -1.0/measurement_noise
# update xi values corresponding to measurement between x0 and Lx0
xi[x0] -= dx_lm/measurement_noise
xi[Lx0] += dx_lm/measurement_noise
# update xi values corresponding to measurement between y0 and Ly0
xi[y0] -= dy_lm/measurement_noise
xi[Ly0] += dy_lm/measurement_noise
# Update omega and xi to account for motion from (x0,y0) to (x1,y1)
# Motion noise taken into account
omega[x0][x0] += 1.0/motion_noise
omega[x1][x1] += 1.0/motion_noise
omega[x0][x1] += -1.0/motion_noise
omega[x1][x0] += -1.0/motion_noise
omega[y0][y0] += 1.0/motion_noise
omega[y1][y1] += 1.0/motion_noise
omega[y0][y1] += -1.0/motion_noise
omega[y1][y0] += -1.0/motion_noise
xi[x0] -= dx/motion_noise
xi[y0] -= dy/motion_noise
xi[x1] += dx/motion_noise
xi[y1] += dy/motion_noise
# Compute the best estimate of poses and landmark positions
# using the formula, omega_inverse * xi
    mu = np.linalg.solve(omega, xi)  # equivalent to inv(omega) * xi, avoids deprecated np.matrix
return mu # return `mu`
```
## Helper functions
To check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmark locations and returns those as their own, separate lists.
Then, we define a function that nicely prints out these lists; both of these we will call in the next step.
```
# a helper function that creates a list of poses and of landmarks for ease of printing
# this only works for the suggested constraint architecture of interlaced x,y poses
def get_poses_landmarks(mu, N):
# create a list of poses
poses = []
for i in range(N):
poses.append((mu[2*i].item(), mu[2*i+1].item()))
# create a list of landmarks
landmarks = []
for i in range(num_landmarks):
landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))
# return completed lists
return poses, landmarks
def print_all(poses, landmarks):
print('\n')
print('Estimated Poses:')
for i in range(len(poses)):
print('['+', '.join('%.3f'%p for p in poses[i])+']')
print('\n')
print('Estimated Landmarks:')
for i in range(len(landmarks)):
print('['+', '.join('%.3f'%l for l in landmarks[i])+']')
```
## Run SLAM
Once you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!
### What to Expect
The `data` that is generated is random, but you did specify the number of time steps, `N`, that the robot was expected to move, and the `num_landmarks` in the world (for which your implementation of `slam` should estimate positions). Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.
With these values in mind, you should expect to see a result that displays two lists:
1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length, since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a square world of size 100.0.
2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length.
#### Landmark Locations
If you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).
```
# call your implementation of slam, passing in the necessary parameters
mu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)
# print out the resulting landmarks and poses
if(mu is not None):
# get the lists of poses and landmarks
# and print them out
poses, landmarks = get_poses_landmarks(mu, N)
print_all(poses, landmarks)
```
## Visualize the constructed world
Finally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the positions of landmarks, created from only motion and measurement data!
**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**
```
# import the helper function
from helpers import display_world
# Display the final world!
# define figure size
plt.rcParams["figure.figsize"] = (20,20)
# check if poses has been created
if 'poses' in locals():
# print out the last pose
print('Last pose: ', poses[-1])
# display the last position of the robot *and* the landmark positions
display_world(int(world_size), poses[-1], landmarks)
```
### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?
You can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters?
**Answer**: (Write your answer here.)
## Testing
To confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close-to or exactly** identical to the given results. Minor discrepancies can come down to floating point accuracy or to the way the inverse matrix is calculated.
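When comparing your numbers to the reference values, a tolerance-based check is more robust than eyeballing exact digits. A minimal sketch (the pose values below are invented stand-ins for your actual `slam` output):

```python
import numpy as np

# Compare an estimated pose against a reference pose within a tolerance,
# instead of requiring digit-for-digit equality
expected_pose = (50.000, 50.000)
estimated_pose = (49.998, 50.003)   # e.g. the first pose from your slam() result

assert np.allclose(estimated_pose, expected_pose, atol=1e-2), \
    'pose differs by more than the allowed tolerance'
print('pose matches within tolerance')
```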
### Submit your project
If you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!
```
# Here is the data and estimated outputs for test case 1
test_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, 
-18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]
## Test Case 1
##
# Estimated Pose(s):
# [50.000, 50.000]
# [37.858, 33.921]
# [25.905, 18.268]
# [13.524, 2.224]
# [27.912, 16.886]
# [42.250, 30.994]
# [55.992, 44.886]
# [70.749, 59.867]
# [85.371, 75.230]
# [73.831, 92.354]
# [53.406, 96.465]
# [34.370, 100.134]
# [48.346, 83.952]
# [60.494, 68.338]
# [73.648, 53.082]
# [86.733, 38.197]
# [79.983, 20.324]
# [72.515, 2.837]
# [54.993, 13.221]
# [37.164, 22.283]
# Estimated Landmarks:
# [82.679, 13.435]
# [70.417, 74.203]
# [36.688, 61.431]
# [18.705, 66.136]
# [20.437, 16.983]
### Uncomment the following three lines for test case 1 and compare the output to the values above ###
mu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_1, 20)
print_all(poses, landmarks)
# Here is the data and estimated outputs for test case 2
test_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 
10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, -19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]]
## Test Case 2
##
# Estimated Pose(s):
# [50.000, 50.000]
# [69.035, 45.061]
# [87.655, 38.971]
# [76.084, 55.541]
# [64.283, 71.684]
# [52.396, 87.887]
# [44.674, 68.948]
# [37.532, 49.680]
# [31.392, 30.893]
# [24.796, 12.012]
# [33.641, 26.440]
# [43.858, 43.560]
# [54.735, 60.659]
# [65.884, 77.791]
# [77.413, 94.554]
# [96.740, 98.020]
# [76.149, 99.586]
# [70.211, 80.580]
# [64.130, 61.270]
# [58.183, 42.175]
# Estimated Landmarks:
# [76.777, 42.415]
# [85.109, 76.850]
# [13.687, 95.386]
# [59.488, 39.149]
# [69.283, 93.654]
### Uncomment the following three lines for test case 2 and compare to the values above ###
mu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)
poses, landmarks = get_poses_landmarks(mu_2, 20)
print_all(poses, landmarks)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/niranjana1997/RIT-DSCI-633-FDS/blob/main/Assignments/DSCI_633_Assn_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
IMAGES_PATH = "."  # save_fig writes figures here; point this at an existing directory
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
import pandas as pd
def load_data(titanic_path):
csv_path = os.path.join(titanic_path)
return pd.read_csv(csv_path)
titanic_data = load_data("/content/train.csv")
titanic_data.head()
titanic_data.describe()
# Exercise: print the count of different values for a categorical column, e.g. "Embarked"
titanic_data.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
titanic_1 = load_data("/content/train.csv")
titanic_1.head(10)
titanic_2 = load_data("/content/test.csv")
titanic_2.head(10)
df = pd.concat([titanic_1,titanic_2])
df = df[['PassengerId','Age','Pclass','Sex','Embarked','Survived']].copy()
df = df.dropna()
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
df['Sex'] = encoder.fit_transform(df['Sex'])
df['Embarked'] = encoder.fit_transform(df['Embarked'])
df = df.astype(int)
df.isnull().sum()
df
import seaborn as sns
plot = sns.FacetGrid(df, row='Pclass', col='Sex')
plot.map(plt.hist, 'Age', alpha=0.5, bins=10)
plot.add_legend()
# Using train_test_split()
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
# delete
train_set, test_set = split_train_test(df, 0.2)
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
train_set, test_set = train_test_split(df, test_size=0.2, random_state=42)
len(df)
len(train_set)
len(test_set)
X_train = train_set.drop('Survived', axis=1)
Y_train = train_set['Survived']
X_test = test_set.drop('Survived', axis=1).copy()  # drop the label so X_test matches X_train's columns
X_train.shape, Y_train.shape, X_test.shape
log_reg = LogisticRegression()
log_reg.fit(X_train, Y_train)
Y_pred = log_reg.predict(X_test)
score = log_reg.score(X_train, Y_train)
score
```
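As an aside, the categorical encoding above can also be done with an explicit mapping instead of `LabelEncoder`, which makes the code-to-category correspondence visible and reproducible. A small sketch on toy data (not the Titanic files; the category codes are chosen arbitrarily here):

```python
import pandas as pd

# Toy frame with the same two categorical columns used in the notebook
toy = pd.DataFrame({
    'Sex': ['male', 'female', 'female', 'male'],
    'Embarked': ['S', 'C', 'Q', 'S'],
})

# Explicit, documented mappings (an assumption for illustration,
# not the codes LabelEncoder would assign)
toy['Sex'] = toy['Sex'].map({'male': 0, 'female': 1})
toy['Embarked'] = toy['Embarked'].map({'C': 0, 'Q': 1, 'S': 2})
print(toy)
```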
|
github_jupyter
|
```
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
paragraph = """I have three visions for India. In 3000 years of our history, people from all over
the world have come and invaded us, captured our lands, conquered our minds.
From Alexander onwards, the Greeks, the Turks, the Moguls, the Portuguese, the British,
the French, the Dutch, all of them came and looted us, took over what was ours.
Yet we have not done this to any other nation. We have not conquered anyone.
We have not grabbed their land, their culture,
their history and tried to enforce our way of life on them.
Why? Because we respect the freedom of others.That is why my
first vision is that of freedom. I believe that India got its first vision of
this in 1857, when we started the War of Independence. It is this freedom that
we must protect and nurture and build on. If we are not free, no one will respect us.
My second vision for India’s development. For fifty years we have been a developing nation.
It is time we see ourselves as a developed nation. We are among the top 5 nations of the world
in terms of GDP. We have a 10 percent growth rate in most areas. Our poverty levels are falling.
Our achievements are being globally recognised today. Yet we lack the self-confidence to
see ourselves as a developed nation, self-reliant and self-assured. Isn’t this incorrect?
I have a third vision. India must stand up to the world. Because I believe that unless India
stands up to the world, no one will respect us. Only strength respects strength. We must be
strong not only as a military power but also as an economic power. Both must go hand-in-hand.
My good fortune was to have worked with three great minds. Dr. Vikram Sarabhai of the Dept. of
space, Professor Satish Dhawan, who succeeded him and Dr. Brahm Prakash, father of nuclear material.
I was lucky to have worked with all three of them closely and consider this the great opportunity of my life.
I see four milestones in my career"""
sentences = nltk.sent_tokenize(paragraph)
lemmatizer = WordNetLemmatizer()
for i in range(len(sentences)):
words = nltk.word_tokenize(sentences[i])
words = [lemmatizer.lemmatize(word) for word in words if word not in set(stopwords.words('english'))]
sentences[i] = ' '.join(words)
sentences
```
|
github_jupyter
|
# Bilayer Sonophore model: pre-computation of intermolecular pressure
Profiled simulations of the mechanical model in isolation reveal that the spatial integration of intermolecular pressure $P_M$ is by far the longest internal computation at each iteration. Hence, we seek to decrease its computational cost.
Luckily, despite its complexity, this integrated pressure term depends solely on leaflet deflection and the nature of its profile is similar to that of its local counterpart.
Therefore, a precomputing step is defined wherein a Lennard-Jones expression of the form:
$\tilde{P_M}(Z)= \tilde{A_r} \big[ \big(\frac{\tilde{\Delta^*}}{2 \cdot Z + \Delta(Q_m)}\big)^\tilde{x} - \big(\frac{\tilde{\Delta^*}}{2 \cdot Z + \Delta(Q_m)}\big)^\tilde{y} \big]$
is fitted to the integrated profile and then used as a new predictor of intermolecular pressure during the iterative numerical resolution.
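A minimal sketch of evaluating a predictor of this Lennard-Jones form is shown below. The coefficient values here are invented for illustration; in the actual workflow they come from the fit to the integrated profile described above.

```python
import numpy as np

def lj_predictor(Z, A, delta_star, Delta, x, y):
    '''Lennard-Jones-form pressure predictor P(Z) = A * [(d*/(2Z+D))^x - (d*/(2Z+D))^y].'''
    ratio = delta_star / (2.0 * Z + Delta)
    return A * (ratio**x - ratio**y)

# Evaluate over a range of deflections (parameter values are placeholders)
Z = np.linspace(0.1e-9, 10e-9, 5)   # deflections (m)
P = lj_predictor(Z, A=1e5, delta_star=1.4e-9, Delta=1.26e-9, x=5.0, y=3.3)
print(P)
```

Because it is a closed-form expression, evaluating this predictor at each iteration is far cheaper than spatially integrating the intermolecular pressure.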
### Imports
```
import logging
import time
import numpy as np
import matplotlib.pyplot as plt
from PySONIC.utils import logger, rmse, rsquared
from PySONIC.neurons import getPointNeuron
from PySONIC.core import BilayerSonophore, PmCompMethod, AcousticDrive
from PySONIC.constants import *
# Set logging level
logger.setLevel(logging.INFO)
```
### Functions
```
def plotPmavg(bls, Z, fs=15):
fig, ax = plt.subplots(figsize=(5, 3))
for skey in ['right', 'top']:
ax.spines[skey].set_visible(False)
ax.set_xlabel('Z (nm)', fontsize=fs)
ax.set_ylabel('Pressure (kPa)', fontsize=fs)
ax.set_xticks([0, bls.a * 1e9])
ax.set_xticklabels(['0', 'a'])
ax.set_yticks([-10, 0, 40])
ax.set_ylim([-10, 50])
for item in ax.get_xticklabels() + ax.get_yticklabels():
item.set_fontsize(fs)
ax.plot(Z * 1e9, bls.v_PMavg(Z, bls.v_curvrad(Z), bls.surface(Z)) * 1e-3, label='$P_m$')
ax.plot(Z * 1e9, bls.PMavgpred(Z) * 1e-3, label='$P_{m,approx}$')
ax.axhline(y=0, color='k')
ax.legend(fontsize=fs, frameon=False)
fig.tight_layout()
def plotZprofiles(bls, US_source, Qm, fs=15):
# Run simulations with integrated and predicted intermolecular pressure
t0 = time.perf_counter()
data1, _ = bls.simulate(US_source, Qm, Pm_comp_method=PmCompMethod.direct)
tcomp_direct = time.perf_counter() - t0
print(f'computation time with direct Pm: {tcomp_direct} s')
Z1 = data1['Z'].values[-NPC_DENSE:] * 1e9 # nm
t0 = time.perf_counter()
data2, _ = bls.simulate(US_source, Qm, Pm_comp_method=PmCompMethod.predict)
tcomp_predict = time.perf_counter() - t0
print(f'computation time with predicted Pm: {tcomp_predict} s')
Z2 = data2['Z'].values[-NPC_DENSE:] * 1e9 # nm
tcomp_ratio = tcomp_direct / tcomp_predict
# Plot figure
t = np.linspace(0, US_source.periodicity, US_source.nPerCycle) * 1e6 # us
fig, ax = plt.subplots(figsize=(5, 3))
for skey in ['right', 'top']:
ax.spines[skey].set_visible(False)
ax.set_xlabel('time (us)', fontsize=fs)
ax.set_ylabel('Deflection (nm)', fontsize=fs)
ax.set_xticks([t[0], t[-1]])
for item in ax.get_xticklabels() + ax.get_yticklabels():
item.set_fontsize(fs)
ax.plot(t, Z1, label='$P_m$')
ax.plot(t, Z2, label='$P_{m,approx}$')
ax.axhline(y=0, color='k')
ax.legend(fontsize=fs, frameon=False)
fig.tight_layout()
return fig, Z1, Z2, tcomp_ratio
```
### Parameters
```
pneuron = getPointNeuron('RS')
bls = BilayerSonophore(32e-9, pneuron.Cm0, pneuron.Qm0)
US_source = AcousticDrive(500e3, 100e3)
Qm = pneuron.Qm0
```
### Profiles comparison over deflection range
```
Z = np.linspace(-0.4 * bls.Delta_, bls.a, 1000)
fig = plotPmavg(bls, Z)
```
### Error quantification over a typical acoustic cycle
```
fig, Z1, Z2, tcomp_ratio = plotZprofiles(bls, US_source, Qm)
error_Z = rmse(Z1, Z2)
r2_Z = rsquared(Z1, Z2)
err_pct = error_Z / (Z1.max() - Z1.min()) * 1e2
print(f'Z-error: R2 = {r2_Z:.4f}, RMSE = {error_Z:.4f} nm ({err_pct:.4f}% dZ)')
print(f'computational boost: {tcomp_ratio:.1f}-fold')
```
As we can see, this simplification reduces computation times by more than an order of magnitude without significantly affecting the resulting deflection profiles.
|
github_jupyter
|
---
## BigDL-Nano Resnet example on CIFAR10 dataset
---
This example illustrates how to apply BigDL-Nano optimizations to an image recognition task built on the PyTorch Lightning framework. The basic image recognition module is implemented with Lightning and trained on the [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) benchmark dataset.
```
from time import time
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from pl_bolts.datamodules import CIFAR10DataModule
from pl_bolts.transforms.dataset_normalizations import cifar10_normalization
from pytorch_lightning import LightningModule, seed_everything
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
from pytorch_lightning.loggers import TensorBoardLogger
from torch.optim.lr_scheduler import OneCycleLR
from torchmetrics.functional import accuracy
from bigdl.nano.pytorch.trainer import Trainer
from bigdl.nano.pytorch.vision import transforms
```
### CIFAR10 Data Module
---
Import the existing data module from bolts and modify the train and test transforms.
See [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) for an overview of the dataset.
```
def prepare_data(data_path, batch_size, num_workers):
train_transforms = transforms.Compose(
[
transforms.RandomCrop(32, 4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
cifar10_normalization()
]
)
test_transforms = transforms.Compose(
[
transforms.ToTensor(),
cifar10_normalization()
]
)
cifar10_dm = CIFAR10DataModule(
data_dir=data_path,
batch_size=batch_size,
num_workers=num_workers,
train_transforms=train_transforms,
test_transforms=test_transforms,
val_transforms=test_transforms
)
return cifar10_dm
```
### Resnet
---
Modify the pre-existing ResNet architecture from TorchVision. It was designed for ImageNet images (224x224) as input, so we need to adapt it for CIFAR10 images (32x32).
```
def create_model():
model = torchvision.models.resnet18(pretrained=False, num_classes=10)
model.conv1 = nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
model.maxpool = nn.Identity()
return model
```
### Lightning Module
---
Check out the [configure_optimizers](https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#configure-optimizers) method to use custom Learning Rate schedulers. The OneCycleLR with SGD will get you to around 92-93% accuracy in 20-30 epochs and 93-94% accuracy in 40-50 epochs. Feel free to experiment with different LR schedules from https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
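To see what the OneCycleLR schedule actually does, here is a minimal stand-alone sketch in plain PyTorch (toy model, illustrative numbers; the cells below configure the same scheduler through Lightning):

```python
import torch
from torch.optim.lr_scheduler import OneCycleLR

model = torch.nn.Linear(10, 2)  # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
epochs, steps_per_epoch = 3, 100
scheduler = OneCycleLR(optimizer, max_lr=0.1,
                       epochs=epochs, steps_per_epoch=steps_per_epoch)

lrs = []
for _ in range(epochs * steps_per_epoch):
    optimizer.step()   # gradients omitted; we only track the schedule
    scheduler.step()
    lrs.append(optimizer.param_groups[0]['lr'])
# The LR warms up to max_lr (0.1), then anneals down to a tiny final value.
```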
```
class LitResnet(LightningModule):
    def __init__(self, learning_rate=0.05, steps_per_epoch=45000, batch_size=32):
super().__init__()
self.save_hyperparameters()
self.model = create_model()
def forward(self, x):
out = self.model(x)
return F.log_softmax(out, dim=1)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
self.log("train_loss", loss)
return loss
def evaluate(self, batch, stage=None):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
if stage:
self.log(f"{stage}_loss", loss, prog_bar=True)
self.log(f"{stage}_acc", acc, prog_bar=True)
def validation_step(self, batch, batch_idx):
self.evaluate(batch, "val")
def test_step(self, batch, batch_idx):
self.evaluate(batch, "test")
def configure_optimizers(self):
optimizer = torch.optim.SGD(
self.parameters(),
lr=self.hparams.learning_rate,
momentum=0.9,
weight_decay=5e-4,
)
        # hparams.steps_per_epoch holds the number of training samples;
        # divide by the batch size to get the actual optimizer steps per epoch
        steps_per_epoch = self.hparams.steps_per_epoch // self.hparams.batch_size
scheduler_dict = {
"scheduler": OneCycleLR(
optimizer,
0.1,
epochs=self.trainer.max_epochs,
steps_per_epoch=steps_per_epoch,
),
"interval": "step",
}
return {"optimizer": optimizer, "lr_scheduler": scheduler_dict}
seed_everything(7)
PATH_DATASETS = os.environ.get("PATH_DATASETS", ".")
BATCH_SIZE = 32
NUM_WORKERS = 0
data_module = prepare_data(PATH_DATASETS, BATCH_SIZE, NUM_WORKERS)
EPOCHS = int(os.environ.get('FIT_EPOCHS', 30))
```
### Train
Use `Trainer` from `bigdl.nano.pytorch.trainer` for BigDL-Nano PyTorch.
This Trainer extends the PyTorch Lightning `Trainer` with various options to accelerate PyTorch training.
```
:param num_processes: number of processes in distributed training. default: 4.
:param use_ipex: whether we use ipex as accelerator for trainer. default: True.
:param cpu_for_each_process: A list of length `num_processes`, each containing a list of
indices of cpus each process will be using. default: None, and the cpu will be
automatically and evenly distributed among processes.
```
The next few cells show examples of different parameters.
### Single Process
---
```
model = LitResnet(learning_rate=0.05)
model.datamodule = data_module
checkpoint_callback = ModelCheckpoint(dirpath="checkpoints/", save_top_k=1, monitor="val_loss", filename="resnet18_single_none")
basic_trainer = Trainer(num_processes = 1,
use_ipex = False,
progress_bar_refresh_rate=10,
max_epochs=EPOCHS,
logger=TensorBoardLogger("lightning_logs/", name="basic"),
callbacks=[LearningRateMonitor(logging_interval="step"), checkpoint_callback])
start = time()
basic_trainer.fit(model, datamodule=data_module)
basic_fit_time = time() - start
outputs = basic_trainer.test(model, datamodule=data_module)
basic_acc = outputs[0]['test_acc'] * 100
```
### Single Process with IPEX
---
```
model = LitResnet(learning_rate=0.05)
model.datamodule = data_module
checkpoint_callback = ModelCheckpoint(dirpath="checkpoints/", save_top_k=1, monitor="val_loss", filename="resnet18_single_ipex", save_weights_only=True)
single_ipex_trainer = Trainer(num_processes=1,
use_ipex = True,
distributed_backend="subprocess",
progress_bar_refresh_rate=10,
max_epochs=EPOCHS,
logger=TensorBoardLogger("lightning_logs/", name="single_ipex"),
callbacks=[LearningRateMonitor(logging_interval="step"), checkpoint_callback])
start = time()
single_ipex_trainer.fit(model, datamodule=data_module)
single_ipex_fit_time = time() - start
outputs = single_ipex_trainer.test(model, datamodule=data_module)
single_ipex_acc = outputs[0]['test_acc'] * 100
```
### Multiple Processes with IPEX
---
```
model = LitResnet(learning_rate=0.1, batch_size=64)
model.datamodule = data_module
checkpoint_callback = ModelCheckpoint(dirpath="checkpoints/", save_top_k=1, monitor="val_loss", filename="resnet18_multi_ipex", save_weights_only=True)
multi_ipex_trainer = Trainer(num_processes=2,
use_ipex=True,
distributed_backend="subprocess",
progress_bar_refresh_rate=10,
max_epochs=EPOCHS,
logger=TensorBoardLogger("lightning_logs/", name="multi_ipx"),
callbacks=[LearningRateMonitor(logging_interval="step"), checkpoint_callback])
start = time()
multi_ipex_trainer.fit(model, datamodule=data_module)
multi_ipex_fit_time = time() - start
outputs = multi_ipex_trainer.test(model, datamodule=data_module)
multi_ipex_acc = outputs[0]['test_acc'] * 100
template = """
| Precision | Fit Time(s) | Accuracy(%) |
| Basic | {:5.2f} | {:5.2f} |
| Single With Ipex | {:5.2f} | {:5.2f} |
| Multiple With Ipex| {:5.2f} | {:5.2f} |
"""
summary = template.format(
basic_fit_time, basic_acc,
single_ipex_fit_time, single_ipex_acc,
multi_ipex_fit_time, multi_ipex_acc
)
print(summary)
```
|
github_jupyter
|
# Apply CNN Classifier to DESI Spectra and visualize results with gradCAM
Mini-SV2 tiles from February-March 2020:
- https://desi.lbl.gov/trac/wiki/TargetSelectionWG/miniSV2
See also the DESI tile picker with (limited) SV0 tiles from March 2020:
- https://desi.lbl.gov/svn/data/tiles/trunk/
- https://desi.lbl.gov/svn/data/tiles/trunk/SV0.html
```
import sys
sys.path.append('/global/homes/p/palmese/desi/timedomain/desitrip/py/') #Note:change this path as needed!
sys.path.append('/global/homes/p/palmese/desi/timedomain/timedomain/')
from desispec.io import read_spectra, write_spectra
from desispec.spectra import Spectra
from desispec.coaddition import coadd_cameras
from desitarget.cmx.cmx_targetmask import cmx_mask
from desitrip.preproc import rebin_flux, rescale_flux
from desitrip.deltamag import delta_mag
from astropy.io import fits
from astropy.table import Table, vstack, hstack
from glob import glob
from datetime import date
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from tensorflow import keras
mpl.rc('font', size=14)
# Set up BGS target bit selection.
cmx_bgs_bits = '|'.join([_ for _ in cmx_mask.names() if 'BGS' in _])
```
## Select a Date & Tile from matches files
```
matches_filename='matches_DECam.npy'
file_arr=np.load('matches_DECam.npy', allow_pickle=True)
obsdates = file_arr[:,0]
tile_ids = file_arr[:,1]
petal_ids = file_arr[:,2]
target_ids = file_arr[:,3]
tamuids = file_arr[:,6]
# Access redux folder.
zbfiles = []
cafiles = []
redux='/global/project/projectdirs/desi/spectro/redux/daily/tiles'
for tile_id, obsdate, petal_id, targetid in zip(tile_ids[:], obsdates[:], petal_ids[:], target_ids[:]):
tile_id = int(tile_id)
if obsdate < 20210301:
print('Skipping files')
continue
elif obsdate < 20210503:
prefix_in ='/'.join([redux, str(tile_id), str(obsdate)])
else:
prefix_in = '/'.join([redux, 'cumulative', str(tile_id),str(obsdate)])
#print(prefix_in)
if not os.path.isdir(prefix_in):
print('{} does not exist.'.format(prefix_in))
continue
# List zbest and coadd files.
# Data are stored by petal ID.
if obsdate < 20210503:
fileend = '-'.join((str(petal_id), str(tile_id), str(obsdate)))
cafile=sorted(glob('{}/coadd-'.format(prefix_in) + fileend + '*.fits'))
else:
fileend = '-'.join((str(petal_id), str(tile_id)))
cafile=sorted(glob('{}/spectra-'.format(prefix_in) + fileend + '*.fits'))
#print(fileend)
zbfile=sorted(glob('{}/zbest-'.format(prefix_in) + fileend + '*.fits'))
zbfiles.extend(zbfile)
cafiles.extend(cafile)
print(len(zbfiles))
print(len(cafiles))
print(len(tile_ids))
print(len(obsdates))
print(len(petal_ids))
```
## Load the Keras Model
Load a model trained on real or simulated data using the native Keras output format. In the future this could be updated to just load the Keras weights.
```
tfmodel = '/global/homes/l/lehsani/timedomain/desitrip/docs/nb/models_9label_first/6_b65_e200_9label/b65_e200_9label_model'
#tfmodel = '/global/homes/s/sybenzvi/desi/timedomain/desitrip/docs/nb/6label_cnn_restframe'
if os.path.exists(tfmodel):
classifier = keras.models.load_model(tfmodel)
else:
classifier = None
print('Sorry, could not find {}'.format(tfmodel))
if classifier is not None:
classifier.summary()
```
## Loop Through Spectra and Classify
```
# Loop through zbest and coadd files for each petal.
# Extract the fibermaps, ZBEST tables, and spectra.
# Keep only BGS targets passing basic event selection.
allzbest = None
allfmap = None
allwave = None
allflux = None
allivar = None
allmask = None
allres = None
handy_table = []
color_string = 'brz'
count = 0
for cafile, zbfile, targetid, obsdate in zip(cafiles, zbfiles, target_ids, obsdates): # rows[:-1] IS TEMPORARY
# Access data per petal.
print("Accessing file number ",count)
print(cafile,zbfile)
zbest = Table.read(zbfile, 'ZBEST')
idx_zbest = (zbest['TARGETID']==targetid)
targetids = zbest[idx_zbest]['TARGETID']
chi2 = zbest[idx_zbest]['CHI2']
pspectra = read_spectra(cafile)
if obsdate>20210503:
select_nite = pspectra.fibermap['NIGHT'] == obsdate
pspectra = pspectra[select_nite]
cspectra = coadd_cameras(pspectra)
fibermap = cspectra.fibermap
idx_fibermap = (fibermap['TARGETID'] == targetid)
ra = fibermap[idx_fibermap]['TARGET_RA'][0]
dec = fibermap[idx_fibermap]['TARGET_DEC'][0]
handy_table.append((targetid, tamuids[count], ra, dec, tile_ids[count], obsdate))
    #print(pspectra.flux)
# Apply standard event selection.
#isTGT = fibermap['OBJTYPE'] == 'TGT'
#isGAL = zbest['SPECTYPE'] == 'GALAXY'
#& isGAL #isTGT #& isBGS
#exp_id = fibermap['EXPID'] & select # first need to figure out all columns as this fails
#print(select)
count += 1
# Accumulate spectrum data.
if allzbest is None:
allzbest = zbest[idx_zbest]
allfmap = fibermap[idx_fibermap]
allwave = cspectra.wave[color_string]
allflux = cspectra.flux[color_string][idx_fibermap]
allivar = cspectra.ivar[color_string][idx_fibermap]
allmask = cspectra.mask[color_string][idx_fibermap]
allres = cspectra.resolution_data[color_string][idx_fibermap]
else:
allzbest = vstack([allzbest, zbest[idx_zbest]])
allfmap = vstack([allfmap, fibermap[idx_fibermap]])
allflux = np.vstack([allflux, cspectra.flux[color_string][idx_fibermap]])
allivar = np.vstack([allivar, cspectra.ivar[color_string][idx_fibermap]])
allmask = np.vstack([allmask, cspectra.mask[color_string][idx_fibermap]])
allres = np.vstack([allres, cspectra.resolution_data[color_string][idx_fibermap]])
# Apply the DESITRIP preprocessing to selected spectra.
rewave, reflux, reivar = rebin_flux(allwave, allflux, allivar, allzbest['Z'],
minwave=2500., maxwave=9500., nbins=150,
log=True, clip=True)
rsflux = rescale_flux(reflux)
# Run the classifier on the spectra.
# The output layer uses softmax activation to produce an array of label probabilities.
# The classification is based on argmax(pred).
pred = classifier.predict(rsflux)
allflux.shape
pred.shape
ymax = np.max(pred, axis=1)
#print(ymax)
#handy_table.pop(0)
print('targetid', '(ra, dec)', 'tileid', 'obsdate', 'row - prob', sep=", ")
for i in range(len(ymax)):
print(handy_table[i], "-", round(ymax[i],2)) #print(handy_table)
fig, ax = plt.subplots(1,1, figsize=(6,4), tight_layout=True)
ax.hist(ymax, bins=np.linspace(0,1,51))
ax.set(xlabel='$\max{(y_\mathrm{pred})}$',
ylabel='count');
#title='Tile {}, {}'.format(tile_id, obsdate));
```
### Selection on Classifier Output
To be conservative we can select only spectra where the classifier is very confident in its output, e.g., ymax > 0.99. See the [CNN training notebook](https://github.com/desihub/timedomain/blob/master/desitrip/docs/nb/cnn_multilabel-restframe.ipynb) for the motivation behind this cut.
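The selection mechanics reduce to a few NumPy calls; a minimal sketch with toy softmax outputs (values are illustrative):

```python
import numpy as np

# Toy softmax outputs for three spectra (rows) over three classes
pred = np.array([[0.050, 0.900, 0.050],
                 [0.995, 0.004, 0.001],
                 [0.400, 0.350, 0.250]])
ymax = np.max(pred, axis=1)           # classifier confidence per spectrum
labels = np.argmax(pred, axis=1)      # predicted class per spectrum
selection = np.argwhere(ymax > 0.99)  # keep only very confident classifications
print(selection)  # -> [[1]]
```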
```
idx = np.argwhere(ymax > 0.0)  # no cut applied here; raise to 0.99 for the conservative selection
labels = np.argmax(pred, axis=1)
idx.shape
label_names = ['Galaxy',
'SN Ia',
'SN Ib',
'SN Ib/c',
'SN Ic',
'SN IIn',
'SN IIL/P',
'SN IIP',
'KN']
```
### Save spectra and classification to file
```
# Save classification info to a table.
classification = Table()
classification['TARGETID'] = allfmap[idx]['TARGETID']
classification['CNNPRED'] = pred[idx]
classification['CNNLABEL'] = np.array(label_names)[labels[idx]]
# Merge the classification and redrock fit to the fibermap.
#Temporary fix for candidate mismatch
fmap = hstack([allfmap[idx], allzbest[idx], classification])
fmap['TARGETID_1'].name='TARGETID'
fmap.remove_columns(['TARGETID_2','TARGETID_3'])
# Pack data into Spectra and write to FITS.
cand_spectra = Spectra(bands=['brz'],
wave={'brz' : allwave},
flux={'brz' : allflux[idx]},
ivar={'brz' : allivar[idx]},
mask={'brz' : allmask[idx]},
resolution_data={'brz' : allres[idx]},
fibermap=fmap
)
outfits = 'DECam_transient_spectra.fits'
write_spectra(outfits, cand_spectra)
print('Output file saved in {}'.format(outfits))
```
### GradCAM action happens here
Adapted from https://keras.io/examples/vision/grad_cam/
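Stripped of the Keras plumbing, the channel weighting at the heart of Grad-CAM is just a few array operations: pool the gradients per channel, scale each feature-map channel by its pooled gradient, average over channels, then ReLU-normalize. A pure-NumPy sketch with toy shapes (all values are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
conv_out = rng.random((50, 8))   # last-conv activations: (positions, channels)
grads = rng.random((1, 50, 8))   # d(top-class score)/d(activations)

pooled = grads.mean(axis=(0, 1))            # mean gradient per channel
weighted = conv_out * pooled                # weight each channel by its importance
heatmap = weighted.mean(axis=-1)            # channel-wise mean -> per-position map
heatmap = np.maximum(heatmap, 0) / heatmap.max()  # ReLU + normalize to [0, 1]
```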
```
import tensorflow as tf
last_conv_layer_name = "conv1d_23"
classifier_layer_names = [
"batch_normalization_23",
"activation_23",
"max_pooling1d_23",
"flatten_5",
"dense_5",
"dropout_5",
"Output_Classes"
]
def make_gradcam_heatmap(
img_array, model, last_conv_layer_name, classifier_layer_names
):
# First, we create a model that maps the input image to the activations
# of the last conv layer
last_conv_layer = model.get_layer(last_conv_layer_name)
last_conv_layer_model = keras.Model(model.inputs, last_conv_layer.output)
# Second, we create a model that maps the activations of the last conv
# layer to the final class predictions
classifier_input = keras.Input(shape=last_conv_layer.output.shape[1:])
x = classifier_input
for layer_name in classifier_layer_names:
#print(layer_name,x.shape)
x = model.get_layer(layer_name)(x)
classifier_model = keras.Model(classifier_input, x)
# Then, we compute the gradient of the top predicted class for our input image
# with respect to the activations of the last conv layer
with tf.GradientTape() as tape:
# Compute activations of the last conv layer and make the tape watch it
last_conv_layer_output = last_conv_layer_model(img_array)
tape.watch(last_conv_layer_output)
# Compute class predictions
preds = classifier_model(last_conv_layer_output)
top_pred_index = tf.argmax(preds[0])
top_class_channel = preds[:, top_pred_index]
# This is the gradient of the top predicted class with regard to
# the output feature map of the last conv layer
grads = tape.gradient(top_class_channel, last_conv_layer_output)
# This is a vector where each entry is the mean intensity of the gradient
# over a specific feature map channel
pooled_grads = tf.reduce_mean(grads, axis=(0, 1))
#print(grads.shape,pooled_grads.shape)
# We multiply each channel in the feature map array
# by "how important this channel is" with regard to the top predicted class
last_conv_layer_output = last_conv_layer_output.numpy()[0]
pooled_grads = pooled_grads.numpy()
for i in range(pooled_grads.shape[-1]):
last_conv_layer_output[:, i] *= pooled_grads[i]
# The channel-wise mean of the resulting feature map
# is our heatmap of class activation
heatmap = np.mean(last_conv_layer_output, axis=-1)
#We apply ReLU here and select only elements>0
# For visualization purpose, we will also normalize the heatmap between 0 & 1
heatmap = np.maximum(heatmap, 0) / np.max(heatmap)
return heatmap
```
### Apply GradCAM to all spectra classified as transients
```
#allzbest = allzbest[1:] #TEMP
#allzbest.pprint_all()
#print(labels.shape)
#print(labels)
#print(rewave.shape)
#print(rsflux.shape)
preprocess_input = keras.applications.xception.preprocess_input
decode_predictions = keras.applications.xception.decode_predictions
# Loop over the selected spectra and fill a 4x4 grid of plots
selection = idx  # indices of spectra passing the confidence cut above
fig, axes = plt.subplots(4,4, figsize=(15,10), sharex=True, sharey=True,
                         gridspec_kw={'wspace':0, 'hspace':0})
for j, ax in zip(selection[:16], axes.flatten()):
myarr=rsflux[j,:]
#print()
# Print what the top predicted class is
preds = classifier.predict(myarr)
#print("Predicted:", preds)
# Generate class activation heatmap
heatmap = make_gradcam_heatmap(
myarr, classifier, last_conv_layer_name, classifier_layer_names
)
color='blue'
rewave_nbin_inblock=rewave.shape[0]/float(heatmap.shape[0])
first_bin=0
for i in range(1,heatmap.shape[0]+1):
alpha=np.min([1,heatmap[i-1]+0.2])
last_bin=int(i*rewave_nbin_inblock)
if (i==1):
ax.plot(rewave[first_bin:last_bin+1], myarr[0,first_bin:last_bin+1],c=color,alpha=alpha,\
label = str(allzbest[j[0]]['TARGETID']) + "\n" +
label_names[labels[j[0]]] +
'\nz={:.2f}'.format(allzbest[j[0]]['Z']) +
'\ntype={}'.format(allzbest[j[0]]['SPECTYPE']) +
'\nprob={:.2f}'.format(ymax[j[0]]))
else:
ax.plot(rewave[first_bin:last_bin+1], myarr[0,first_bin:last_bin+1],c=color,alpha=alpha)
first_bin=last_bin
ax.legend(fontsize=10)
```
### Plot spectra of objects classified as transients
Plot observed spectra
```
testwave, testflux, testivar = rebin_flux(allwave, allflux, allivar,
minwave=2500., maxwave=9500., nbins=150,
log=True, clip=True)
fig, axes = plt.subplots(4,4, figsize=(15,10), sharex=True, sharey=True,
gridspec_kw={'wspace':0, 'hspace':0})
for j, ax in zip(selection, axes.flatten()):
ax.plot(testwave, testflux[j[0]], alpha=0.7, label='label: '+label_names[labels[j[0]]] +# Just this for single plot with [0] on testflux, label_names, allzbest
'\nz={:.2f}'.format(allzbest[j[0]]['Z'])) # +
#'\nobsdate={}'.format(obsdates[j[0]]) +
#'\ntile id: {}'.format(tile_ids[j[0]]) +
#'\npetal id: {}'.format(petal_ids[j[0]]))
ax.set(xlim=(3500,9900),ylim=(-0.1,4))
#ax.fill_between([5600,6000],[-0.1,-0.1],[4,4],alpha=0.1,color='blue')
#ax.fill_between([7400,7800],[-0.1,-0.1],[4,4],alpha=0.1,color='blue')
ax.legend(fontsize=10)
#for k in [0,1,2]:
# axes[k,0].set(ylabel=r'flux [erg s$^{-1}$ cm$^{-1}$ $\AA^{-1}$]')
# axes[2,k].set(xlabel=r'$\lambda_\mathrm{obs}$ [$\AA$]', xlim=(3500,9900))
fig.tight_layout();
#filename = "spectra_plots/all_spectra_TAMU_ylim"
#plt.savefig(filename)
```
### For plotting individual plots
```
# Save to FITS files rather than PNG, see cnn_classify_data.py
# See line 404 - 430, 'Save Classification info to file'
#https://github.com/desihub/timedomain/blob/ab7257a4ed232875f5769cbb11c21f483ceccc5e/cronjobs/cnn_classify_data.py#L404
for j in selection:
plt.plot(testwave, testflux[j[0]], alpha=0.7, label='label: '+ label_names[labels[j[0]]] + # Just this for single plot with [0] on testflux, label_names, allzbest
#'\nz={:.2f}'.format(allzbest[j[0]]['Z']) +
'\nprob={:.2f}'.format(ymax[j[0]]))
#'\nobsdate={}'.format(obsdates[j[0]]) +
#'\ntile id: {}'.format(tile_ids[j[0]]) +
#'\npetal id: {}'.format(petal_ids[j[0]]))
plt.xlim(3500, 9900)
#plt.ylim(-0.1, 50)
#ax.fill_between([5600,6000],[-0.1,-0.1],[4,4],alpha=0.1,color='blue')
#ax.fill_between([7400,7800],[-0.1,-0.1],[4,4],alpha=0.1,color='blue')
plt.legend(fontsize=10)
filename = "spectra_plots/"+"_".join(("TAMU", "spectra", str(allzbest[j[0]]['TARGETID']), str(obsdates[j[0]]), str(tile_ids[j[0]]), str(petal_ids[j[0]]), label_names[labels[j[0]]].replace(" ", "-").replace("/","-")))
#filename = "spectra_plots/"+"_".join(("TAMU", "spectra", str(obsdates[j[0]+1]), str(tile_ids[j[0]+1]), str(petal_ids[j[0]+1]), label_names[labels[j[0]]].replace(" ", "-"))) # temp
#plt.show();
#print(filename)
plt.savefig(filename)
plt.clf()
#for k in [0,1,2]:
# axes[k,0].set(ylabel=r'flux [erg s$^{-1}$ cm$^{-1}$ $\AA^{-1}$]')
# axes[2,k].set(xlabel=r'$\lambda_\mathrm{obs}$ [$\AA$]', xlim=(3500,9900))
#fig.tight_layout();
#filename = "spectra_plots/all_spectra_TAMU_ylim"
#filename = "_".join(("spectra", str(obsdate), str(tile_id), label_names[labels[0]].replace(" ", "-")))
#plt.savefig(filename)
```
### Reading files in parallel
Does not work
```
# Loop through zbest and coadd files for each petal.
# Extract the fibermaps, ZBEST tables, and spectra.
# Keep only BGS targets passing basic event selection.
allzbest = None
allfmap = None
allwave = None
allflux = None
allivar = None
allmask = None
allres = None
handy_table = []
from joblib import Parallel, delayed
njobs=40
color_string = 'brz'
def get_spectra(cafile, zbfile,targetid,tamuid,obsdate,tileid):
# Access data per petal.
print("Accessing file number ",count)
print(cafile,zbfile)
zbest = Table.read(zbfile, 'ZBEST')
idx_zbest = (zbest['TARGETID']==targetid)
targetids = zbest[idx_zbest]['TARGETID']
chi2 = zbest[idx_zbest]['CHI2']
pspectra = read_spectra(cafile)
if obsdate>20210503:
select_nite = pspectra.fibermap['NIGHT'] == obsdate
pspectra = pspectra[select_nite]
cspectra = coadd_cameras(pspectra)
fibermap = cspectra.fibermap
idx_fibermap = (fibermap['TARGETID'] == targetid)
ra = fibermap[idx_fibermap]['TARGET_RA'][0]
dec = fibermap[idx_fibermap]['TARGET_DEC'][0]
return allzbest,allfmap, allwave, allflux, allivar, allmask, allres
allzbest,allfmap, allwave, allflux, allivar, allmask, allres = \
Parallel(n_jobs=njobs)(delayed(get_spectra)(cafile, zbfile,targetid,tamuid,obsdate,tileid) \
for cafile, zbfile,targetid,tamuid,obsdate,tileid in zip(cafiles, zbfiles, target_ids,tamuids, obsdates,tile_ids))
```
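One likely reason the parallel version above fails is that `joblib.Parallel` returns a single list of per-task tuples, not seven separate sequences, so unpacking it into seven variables only works by accident. The outputs need to be transposed with `zip(*...)` first. A minimal sketch of the pattern (the `work` function is a hypothetical stand-in for `get_spectra`):

```python
from joblib import Parallel, delayed

def work(i):
    # stand-in for get_spectra: each task returns a tuple of results
    return i, 2 * i, 3 * i

results = Parallel(n_jobs=2)(delayed(work)(i) for i in range(4))
# results is a list of per-task tuples: [(0, 0, 0), (1, 2, 3), ...]
# Transpose before unpacking into separate sequences:
ones, twos, threes = map(list, zip(*results))
print(twos)  # -> [0, 2, 4, 6]
```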
|
github_jupyter
|
```
!pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
# Initialize VADER so we can use it later
sentimentAnalyser = SentimentIntensityAnalyzer()
import pandas as pd
pd.options.display.max_colwidth = 400
# Read in text file
text = open("../Data/Periodical-text-files-single-pages/AmSn18860101-V01-01-page-1.txt").read()
# Replace line breaks with spaces
text = text.replace('\n', ' ')
!pip install nltk
import nltk
nltk.download('punkt')
nltk.sent_tokenize(text)
for number, sentence in enumerate(nltk.sent_tokenize(text)):
print(number, sentence)
# Break text into sentences
sentences = nltk.sent_tokenize(text)
# Make empty list
sentence_scores = []
# Get each sentence and sentence number, which is what enumerate does
for number, sentence in enumerate(sentences):
# Use VADER to calculate sentiment
scores = sentimentAnalyser.polarity_scores(sentence)
# Make dictionary and append it to the previously empty list
sentence_scores.append({'sentence': sentence, 'sentence_number': number+1, 'sentiment_score': scores['compound']})
pd.DataFrame(sentence_scores)
# Assign the DataFrame to a variable named for this page
AmSn18860101_V01_01_page_1_df = pd.DataFrame(sentence_scores)
# Sort by the column "sentiment_score" and slice for first 10 values
AmSn18860101_V01_01_page_1_df.sort_values(by='sentiment_score')[:10]
# Sort by the column "sentiment_score," this time in descending order, and slice for first 10 values
AmSn18860101_V01_01_page_1_df.sort_values(by='sentiment_score', ascending=False)[:10]
AmSn18860101_V01_01_page_1_df['sentiment_score'].plot();
import matplotlib.pyplot as plt
ax = AmSn18860101_V01_01_page_1_df['sentiment_score'].plot(x='sentence_number', y='sentiment_score', kind='line',
figsize=(10,5), rot=90, title='Sentiment in "AmSn18860101-V01-01-page-1"')
# Plot a horizontal line at 0
plt.axhline(y=0, color='orange', linestyle='-');
# Get averages for a rolling window, then plot
AmSn18860101_V01_01_page_1_df.rolling(5)['sentiment_score'].mean().plot(x='sentence_number', y='sentiment_score', kind='line',
figsize=(10,5), rot=90, title='Sentiment in "AmSn18860101-V01-01-page-1"')
# Plot a horizontal line at 0
plt.axhline(y=0, color='orange', linestyle='-');
#Time to create a loop and do it with all of the documents
with open("../Data/Periodical-topic-model-output/top_doc_list.txt", "r") as f:
pope_docs = f.read().split("\n")
pope_docs
directory = "../Data/Periodical-text-files-single-pages/"
import os
# Make empty list
sentence_scores = []
for doc in pope_docs[:-1]:
text = open(os.path.join(directory, f"{doc}.txt")).read()
# Replace line breaks with spaces
text = text.replace('\n', ' ')
# Break text into sentences
sentences = nltk.sent_tokenize(text)
# Get each sentence and sentence number, which is what enumerate does
for number, sentence in enumerate(sentences):
# Use VADER to calculate sentiment
scores = sentimentAnalyser.polarity_scores(sentence)
# Make dictionary and append it to the previously empty list
sentence_scores.append({'sentence': sentence, 'sentence_number': number+1, 'sentiment_score': scores['compound'], 'doc_id': doc})
sentence_scores
# Assign the DataFrame to a variable
pope_docs_df = pd.DataFrame(sentence_scores)
# Sort by the column "sentiment_score" and slice for first 10 values
pope_docs_df.sort_values(by='sentiment_score')[:10]
# Sort by the column "sentiment_score," this time in descending order, and slice for first 10 values
pope_docs_df.sort_values(by='sentiment_score', ascending=False)[:10]
pope_docs_df[pope_docs_df["doc_id"]=="SOL19030212-V18-07-page-13"]
import pandas as pd
# Adjust the display settings to see more rows
pd.options.display.max_rows = 100
pope_docs_df
pope_docs_df.dtypes
pope_docs_total_df = pope_docs_df.groupby('doc_id')['sentiment_score'].mean()
pope_docs_total_df
final_pope_docs_df = pope_docs_total_df.sort_values(ascending=False)
final_pope_docs_df
# Convert the sorted Series back into a DataFrame with explicit columns
final_pope_docs_sorted_df = pd.DataFrame({'doc_id':final_pope_docs_df.index, 'sentiment_score':final_pope_docs_df.values})
# Read in text file
text = open("../Data/Periodical-text-files-single-pages/SOL19030212-V18-07-page-13.txt").read()
# Replace line breaks with spaces
text = text.replace('\n', ' ')
nltk.sent_tokenize(text)
for number, sentence in enumerate(nltk.sent_tokenize(text)):
print(number, sentence)
# Break text into sentences
sentences = nltk.sent_tokenize(text)
# Make empty list
sentence_scores = []
# Get each sentence and sentence number, which is what enumerate does
for number, sentence in enumerate(sentences):
# Use VADER to calculate sentiment
scores = sentimentAnalyser.polarity_scores(sentence)
# Make dictionary and append it to the previously empty list
sentence_scores.append({'sentence': sentence, 'sentence_number': number+1, 'sentiment_score': scores['compound']})
pd.DataFrame(sentence_scores)
# Assign the DataFrame to a variable named for this page
SOL19030212_V18_07_page_13_df = pd.DataFrame(sentence_scores)
SOL19030212_V18_07_page_13_df['sentiment_score'].plot();
ax = SOL19030212_V18_07_page_13_df['sentiment_score'].plot(x='sentence_number', y='sentiment_score', kind='line',
figsize=(10,5), rot=90, title='Sentiment in SOL19030212-V18-07-page-13')
# Plot a horizontal line at 0
plt.axhline(y=0, color='orange', linestyle='-');
# Get averages for a rolling window, then plot
SOL19030212_V18_07_page_13_df.rolling(5)['sentiment_score'].mean().plot(x='sentence_number', y='sentiment_score', kind='line',
figsize=(10,5), rot=90, title='Sentiment in "SOL19030212-V18-07-page-13"')
# Plot a horizontal line at 0
plt.axhline(y=0, color='orange', linestyle='-');
```
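The rolling-average smoothing used in the plots above can be checked on a tiny series (a minimal sketch with made-up scores):

```python
import pandas as pd

scores = pd.Series([0.5, -0.5, 0.5, -0.5, 0.5, 0.9])
rolled = scores.rolling(5).mean()
# The first four entries are NaN (the window is not yet full);
# the fifth entry is the mean of the first five scores.
print(rolled.tolist())
```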
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import os
import numpy as np, pandas as pd
import matplotlib.pyplot as plt, seaborn as sns
from tqdm import tqdm, tqdm_notebook
from pathlib import Path
# pd.set_option('display.max_columns', 1000)
# pd.set_option('display.max_rows', 400)
sns.set()
os.chdir('..')
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from project.ranker.ranker import RankingPredictor
%%time
rp = Pipeline([
('scale', StandardScaler()),
('estimator', RankingPredictor("ma_100", n_neighbors=15)),
])
df_mf, df_rank, df_scores, df_fold_scores = rp.named_steps['estimator'].get_data()
from sklearn.model_selection import train_test_split
X_train, _, y_train, _, y_scores_train, _ = train_test_split(df_mf.values,
df_rank.values,
df_scores.values,
test_size=0)
X_train.shape, y_train.shape
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=2, random_state=42)
params = {'objective': 'lambdarank',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=10, random_state=42)
params = {'objective': 'lambdarank',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
from project.ranker.ltr_rankers import wide2long
X, y = wide2long(X_train, y_train)
X.shape, y.shape
from scipy.stats import rankdata
y_pred = np.array([rankdata(models[0].predict(wide2long(x_[None,:], y_[None,:])[0]),
method='ordinal') for x_, y_ in zip(X_train, y_train)])
y_pred.shape
13 - y_train[0] + 1
y_pred[0]
from scipy.stats import spearmanr
spearmanr(13 - y_train[0] + 1, y_pred[0])
```
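The cells above feed LightGBM's lambdarank objective with `y_train.shape[1] - y_train + 1`, i.e. ranks (1 = best) flipped into relevance labels (higher = better), which is what the `13 - y_train[0] + 1` check verifies. A minimal sketch of that conversion (the helper name is ours, not part of the project code):

```python
import numpy as np

def rank_to_relevance(ranks):
    """Map ranks (1 = best) to relevance labels (higher = better),
    mirroring the `y.shape[1] - y + 1` expression used above."""
    ranks = np.asarray(ranks)
    return ranks.shape[-1] - ranks + 1

print(rank_to_relevance([1, 3, 2]))  # -> [3 1 2]
```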
## Ranking, Regression, Classification
```
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=2, random_state=42)
params = {'objective': 'lambdarank',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=2, random_state=42)
params = {'objective': 'regression',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=2, random_state=42)
params = {'objective': 'binary',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
```
## 10 runs - 10 folds
```
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=10, random_state=42)
params = {'objective': 'lambdarank',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=10, random_state=42)
params = {'objective': 'regression',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=10, random_state=42)
params = {'objective': 'binary',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train.shape[1] - y_train + 1, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
# LambdaRank
Trn_Spearman: 0.3714 +/-0.0634 | Val_Spearman: 0.0922 +/-0.1403
Trn_ACCLoss: 0.0254 +/-0.0194 | Val_ACCLoss: 0.0921 +/-0.0564
Trn_NDCG: 0.7845 +/-0.0671 | Val_NDCG: 0.6157 +/-0.0555
# Regression
Trn_Spearman: 0.3531 +/-0.1215 | Val_Spearman: 0.1699 +/-0.1319
Trn_ACCLoss: 0.0847 +/-0.0324 | Val_ACCLoss: 0.1166 +/-0.0723
Trn_NDCG: 0.6294 +/-0.0510 | Val_NDCG: 0.5891 +/-0.0665
# Binary Classification
Trn_Spearman: -0.0821 +/-0.0170 | Val_Spearman: -0.0821 +/-0.1533
Trn_ACCLoss: 0.1172 +/-0.0064 | Val_ACCLoss: 0.1172 +/-0.0573
Trn_NDCG: 0.5275 +/-0.0054 | Val_NDCG: 0.5275 +/-0.0486
```
## New ranking
```
%%time
from sklearn.model_selection import train_test_split
rp = Pipeline([
('scale', StandardScaler()),
('estimator', RankingPredictor("ma_100", n_neighbors=15)),
])
df_mf, df_rank, df_scores, _ = rp.named_steps['estimator'].get_data()
df_mf = df_mf.sort_index()
df_rank = df_rank.sort_index()
df_scores = df_scores.sort_index()
X_train, _, y_train, _, y_scores_train, _ = train_test_split(df_mf.values,
df_rank.values,
df_scores.values,
test_size=0,
random_state=42)
print(X_train.shape, y_train.shape, y_scores_train.shape)
df_mf.head()
df_rank.head()
df_scores.head()
%%time
import lightgbm
from project.ranker.ltr_rankers import cv_lgbm
from sklearn.model_selection import RepeatedKFold
kfolds = RepeatedKFold(10, n_repeats=2, random_state=42)
params = {'objective': 'lambdarank',
'metric': 'ndcg',
'learning_rate': 1e-3,
# 'num_leaves': 50,
'ndcg_at': y_train.shape[1],
'min_data_in_leaf': 3,
'min_sum_hessian_in_leaf': 1e-4}
results, models = cv_lgbm(lightgbm, X_train, y_train, y_scores_train, kfolds,
params, num_rounds=1000, early_stopping_rounds=50, verbose_eval=False)
from sklearn.model_selection import KFold
from project.ranker.ltr_rankers import cv_random
from project.ranker.ranker import RandomRankingPredictor
rr = RandomRankingPredictor()
kfolds = RepeatedKFold(10, n_repeats=2, random_state=42)
results = cv_random(rr, X_train, y_train, y_scores_train, kfolds)
lightgbm.plot_importance(models[0], figsize=(5,10))
from project.ranker.ltr_rankers import wide2long
X, y = wide2long(X_train, y_train)
X.shape, y.shape
X_train[0]
```
|
github_jupyter
|
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import itertools
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from brew.base import Ensemble, EnsembleClassifier
from brew.stacking.stacker import EnsembleStack, EnsembleStackClassifier
from brew.combination.combiner import Combiner
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from mlxtend.data import wine_data, iris_data
from mlxtend.plotting import plot_decision_regions
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.ensemble import ExtraTreesClassifier
from itertools import combinations
import random
random.seed(10)
from sklearn.tree import DecisionTreeClassifier
from progress.bar import Bar
import sys, time
try:
from IPython.core.display import clear_output
have_ipython = True
except ImportError:
have_ipython = False
class ProgressBar:
def __init__(self, iterations):
self.iterations = iterations
self.prog_bar = '[]'
self.fill_char = '*'
self.width = 40
self.__update_amount(0)
if have_ipython:
self.animate = self.animate_ipython
else:
self.animate = self.animate_noipython
def animate_ipython(self, iter):
try:
pass
#clear_output()
except Exception:
# terminal IPython has no clear_output
pass
        print('\r', self, end='')
#sys.stdout.flush()
self.update_iteration(iter + 1)
def update_iteration(self, elapsed_iter):
self.__update_amount((elapsed_iter / float(self.iterations)) * 100.0)
self.prog_bar += ' %d of %s complete' % (elapsed_iter, self.iterations)
def __update_amount(self, new_amount):
percent_done = int(round((new_amount / 100.0) * 100.0))
all_full = self.width - 2
num_hashes = int(round((percent_done / 100.0) * all_full))
self.prog_bar = '[' + self.fill_char * num_hashes + ' ' * (all_full - num_hashes) + ']'
        pct_place = (len(self.prog_bar) // 2) - len(str(percent_done))
pct_string = '%d%%' % percent_done
self.prog_bar = self.prog_bar[0:pct_place] + \
(pct_string + self.prog_bar[pct_place + len(pct_string):])
def __str__(self):
return str(self.prog_bar)
class Multiviewer(BaseEstimator, TransformerMixin):
def __init__(self, max_level=3, num_at_each_level=4, base_estimator=ExtraTreesClassifier(n_estimators=50)):
self.max_level = max_level
self.base_estimator = base_estimator
self.num_at_each_level = num_at_each_level
self.estimators = []
self.estim_features = []
self.classes_ = None
    def fit(self, X, y):
        if self.max_level > X.shape[1]:
            print("Max level of feature combinations can't be bigger than num of features")
            print("%d > %d" % (self.max_level, X.shape[1]))
            raise ValueError
        if not isinstance(self.num_at_each_level, list):
            self.num_at_each_level = [self.num_at_each_level for i in range(1, self.max_level)]
        self.num_at_each_level = [X.shape[1]] + self.num_at_each_level
        #print(self.num_at_each_level)
        self.classes_ = list(set(y))
        rang = np.arange(X.shape[1])
        total = 0
        for i in range(1, self.max_level+1):
            total += i * self.num_at_each_level[i-1]
        print("Will create %d trees!" % total)
        cc = 0
        bar = ProgressBar(total)
        for level in range(self.max_level):
            #print([comb for comb in combinations(rang, level+1)])
            wanted_feature_sets = get_cols(rang, level+1, self.num_at_each_level[level])
            for wanted_features in wanted_feature_sets:
                c = sklearn.clone(self.base_estimator)
                c.fit(X[:, wanted_features], y)
                self.estimators.append(c)
                #print(self.estimators[-1].n_features_)
                self.estim_features.append(wanted_features)
                bar.animate(cc)
                cc += 1
        #bar.finish()
        return self
    def predict(self, X):
        if not isinstance(X, np.ndarray):
            X = np.array(X)
        print(X.shape)
        print(X.shape[0])
        predictions = np.empty((X.shape[0], len(self.estimators)))
        for i, est in enumerate(self.estimators):
            # print(i, self.estim_features[i])
            # print(X[:, self.estim_features[i]].shape)
            predictions[:, i] = est.predict(X[:, self.estim_features[i]])
        final_pred = []
        #print(predictions)
        for sample in range(X.shape[0]):
            votes = []
            for i, mod_vote in enumerate(predictions[sample, :]):
                votes.extend([predictions[sample, i] for j in range(1)])
            final_pred.append(most_common(votes))
        return np.array(final_pred).reshape(-1,)
def get_cols(iterable, level_, total_times_):
    wanted = [random.sample(list(iterable), k=level_) for time in range(total_times_)]
    return wanted
def most_common(lst):
return max(set(lst), key=lst.count)
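# The predict() above collapses each row of per-estimator predictions into a
# single label with most_common, i.e. a plain majority vote (ties broken
# arbitrarily by max over the set). A quick illustration of the voting rule:
def _majority_vote(votes):  # same logic as most_common above; demo name only
    return max(set(votes), key=votes.count)
demo_votes = [0.0, 1.0, 1.0, 2.0, 1.0]
print(_majority_vote(demo_votes))  # 1.0 wins with three of five votes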
# Loading some example data
X, y = wine_data()
#X, y = iris_data()
#X = X[:,[0, 2]]
print('Dimensions: %s x %s' % (X.shape[0], X.shape[1]))
print('1st row', X[0])
# Initializing classifiers
clf1 = LogisticRegression(random_state=0)
clf2 = RandomForestClassifier(random_state=0)
clf3 = SVC(random_state=0, probability=True)
mu = Multiviewer(max_level=3, num_at_each_level=[10, 10, 10])
# Creating Ensemble
ensemble = Ensemble([clf1, clf2, clf3])
eclf = EnsembleClassifier(ensemble=ensemble, combiner=Combiner('mean'))
# Creating Stacking
layer_1 = Ensemble([clf1, clf2, clf3])
layer_2 = Ensemble([sklearn.clone(clf1)])
stack = EnsembleStack(cv=3)
stack.add_layer(layer_1)
stack.add_layer(layer_2)
sclf = EnsembleStackClassifier(stack)
clf_list = [clf1, clf2, clf3, eclf, sclf, mu]
lbl_list = ['Logistic Regression', 'Random Forest', 'RBF kernel SVM', 'Ensemble', 'Stacking', 'MULTIVIEWER']
# WARNING, WARNING, WARNING
# brew requires classes from 0 to N, no skipping allowed
d = {yi : i for i, yi in enumerate(set(y))}
y = np.array([d[yi] for yi in y])
# Plotting Decision Regions
#gs = gridspec.GridSpec(2, 3)
#fig = plt.figure(figsize=(10, 8))
itt = itertools.product([0, 1, 2], repeat=2)
from sklearn.model_selection import train_test_split
split = 0.2
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=split, stratify=y, random_state=100)
for clf, lab, grd in zip(clf_list, lbl_list, itt):
clf.fit(X_train, y_train)
# ax = plt.subplot(gs[grd[0], grd[1]])
# fig = plot_decision_regions(X=X, y=y, clf=clf, legend=2)
# plt.title(lab)
    print("Results for: %s" % lab)
    pred = clf.predict(X_cv)
    print(accuracy_score(y_cv, pred, normalize=True))
    print(confusion_matrix(y_cv, pred, labels=list(set(y))))
    print(classification_report(y_cv, pred, labels=list(set(y))))
#plt.show()
clf1.fit(X_train, y_train)
clf1.predict(X_cv).shape
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.ensemble import ExtraTreesClassifier
from itertools import combinations
import random
random.seed(10)
class Multiviewer(BaseEstimator, TransformerMixin):
def __init__(self, max_level=3, num_at_each_level=4, base_estimator=ExtraTreesClassifier(n_estimators=50)):
self.max_level = max_level
self.base_estimator = base_estimator
self.num_at_each_level = num_at_each_level
self.estimators = []
self.estim_features = []
self.classes_ = None
    def fit(self, X, y):
        if self.max_level > X.shape[1]:
            print("Max level of feature combinations can't be bigger than num of features")
            print("%d > %d" % (self.max_level, X.shape[1]))
            raise ValueError
        if not isinstance(self.num_at_each_level, list):
            self.num_at_each_level = [self.num_at_each_level for i in range(1, self.max_level)]
        self.num_at_each_level = [X.shape[1]] + self.num_at_each_level
        #print(self.num_at_each_level)
        self.classes_ = list(set(y))
        rang = np.arange(X.shape[1])
        total = 0
        for i in range(1, self.max_level+1):
            total += i * self.num_at_each_level[i-1]
        print("Will create %d trees!" % total)
        cc = 0
        bar = ProgressBar(total)
        for level in range(self.max_level):
            #print([comb for comb in combinations(rang, level+1)])
            wanted_feature_sets = get_cols(rang, level+1, self.num_at_each_level[level])
            print(wanted_feature_sets)
            for wanted_features in wanted_feature_sets:
                c = sklearn.clone(self.base_estimator)
                c.fit(X[:, wanted_features], y)
                self.estimators.append(c)
                #print(self.estimators[-1].n_features_)
                self.estim_features.append(wanted_features)
                bar.animate(cc)
                cc += 1
        #bar.finish()
        return self
    def predict(self, X):
        if not isinstance(X, np.ndarray):
            X = np.array(X)
        print(X.shape)
        print(X.shape[0])
        predictions = np.empty((X.shape[0], len(self.estimators)))
        for i, est in enumerate(self.estimators):
            # print(i, self.estim_features[i])
            # print(X[:, self.estim_features[i]].shape)
            predictions[:, i] = est.predict(X[:, self.estim_features[i]])
        final_pred = []
        #print(predictions)
        for sample in range(X.shape[0]):
            votes = []
            for i, mod_vote in enumerate(predictions[sample, :]):
                votes.extend([predictions[sample, i] for j in range(1)])
            final_pred.append(most_common(votes))
        return np.array(final_pred).reshape(-1,)
def get_cols(iterable, level_, total_times_):
    wanted = [random.sample(list(iterable), k=level_) for time in range(total_times_)]
    return wanted
def most_common(lst):
return max(set(lst), key=lst.count)
print(X.shape[1])
mu = Multiviewer(max_level=4)
mu.fit(X,y)
pred = mu.predict(X_cv)
print(accuracy_score(y_cv, pred, normalize=True))
print(confusion_matrix(y_cv, pred, labels=list(set(y))))
print(classification_report(y_cv, pred, labels=list(set(y))))
mu.estimators
rang = np.arange(X.shape[1])
from itertools import combinations
import random
random.seed(10)
ss = random.sample([comb for comb in combinations(rang, 2)],2)
X[:, ss[1]].shape
```
|
github_jupyter
|
```
import pickle
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from tqdm import tqdm_notebook
import time
import gc
import numpy as np
import lightgbm as lgb
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
import torch
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
import torch.nn as nn
from torch.utils.data import Dataset,DataLoader
import torch.nn.functional as F
import sklearn
from sklearn.model_selection import StratifiedKFold,KFold
from sklearn.metrics import roc_curve
import time
import os
import itertools
import random
import matplotlib.pyplot as plt
from collections import OrderedDict
from scipy.special import erfinv
from collections import OrderedDict
from math import sqrt
import numpy as np
from torch.optim import lr_scheduler
from sklearn.ensemble import GradientBoostingRegressor
import catboost as cbt
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.woe import WOEEncoder
from category_encoders.target_encoder import TargetEncoder as Encoder
from category_encoders.sum_coding import SumEncoder
from category_encoders.m_estimate import MEstimateEncoder
from category_encoders.leave_one_out import LeaveOneOutEncoder
from category_encoders.helmert import HelmertEncoder
from category_encoders.cat_boost import CatBoostEncoder
from category_encoders import CountEncoder
from category_encoders.one_hot import OneHotEncoder
def seed_everything(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_everything()
def reduce_mem(df):
starttime = time.time()
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if pd.isnull(c_min) or pd.isnull(c_max):
continue
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
print('-- Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction),time spend:{:2.2f} min'.format(end_mem,
100*(start_mem-end_mem)/start_mem,
(time.time()-starttime)/60))
return df
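# A tiny illustration of the downcasting idea inside reduce_mem: integer
# values that fit a narrower dtype are cast down, shrinking memory.
# (Toy column for the demo only.)
import numpy as np
import pandas as pd
demo_df = pd.DataFrame({'a': [0, 1, 100]})    # defaults to int64
mem_before = demo_df.memory_usage().sum()
demo_df['a'] = demo_df['a'].astype(np.int8)   # all values fit int8
mem_after = demo_df.memory_usage().sum()
print(mem_before, '->', mem_after)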
def lower_sample_data_by_sample(df, percent=1, rs=42):
    most_data = df[df['label'] == 0]      # majority-class samples
    minority_data = df[df['label'] == 1]  # minority-class samples
    # Randomly undersample the majority class
    lower_data = most_data.sample(n=int(percent*len(minority_data)), replace=False, random_state=rs, axis=0)
    return pd.concat([lower_data, minority_data])
def get_mask_train(df,samp):
if random.random()<samp:
return -1
else :
return df
#-------------------------------------------------- Data preprocessing --------------------------------------------------#
columns = [ 'uid', 'task_id', 'adv_id', 'creat_type_cd', 'adv_prim_id',
'dev_id', 'inter_type_cd', 'slot_id', 'spread_app_id', 'tags',
'app_first_class', 'app_second_class', 'age', 'city', 'city_rank',
'device_name', 'device_size', 'career', 'gender', 'net_type',
'residence', 'his_app_size', 'his_on_shelf_time', 'app_score',
'emui_dev', 'list_time', 'device_price', 'up_life_duration',
'up_membership_grade', 'membership_life_duration', 'consume_purchase',
'communication_onlinerate', 'communication_avgonline_30d', 'indu_name',
'pt_d']
# Read the datasets
train_df = reduce_mem(pd.read_csv('train_data.csv',sep='|'))
test_df = pd.read_csv('test_data_B.csv',sep='|')
def get_tfidf(train,test,key1,key2):
train_tif = pd.DataFrame(train[[key1, key2]].groupby([key1])[key2].apply(list))
train_tif.reset_index(inplace=True)
train_key1= train_tif[key1].values
train_key2 = train_tif[key2].values.tolist()
train_key2_list = []
for seq in train_key2:
sentences = []
for word in seq:
sentences.append(str(word))
train_key2_list.append(' '.join(sentences))
tfidf_vec = TfidfVectorizer()
train_tfidf_matrix = tfidf_vec.fit_transform(train_key2_list).toarray()
test_tif = pd.DataFrame(test[[key1, key2]].groupby([key1])[key2].apply(list))
test_tif.reset_index(inplace=True)
test_key1= test_tif[key1].values
test_key2 = test_tif[key2].values.tolist()
test_key2_list = []
for seq in test_key2:
sentences = []
for word in seq:
sentences.append(str(word))
test_key2_list.append(' '.join(sentences))
test_tfidf_matrix = tfidf_vec.transform(test_key2_list).toarray()
assert train_tfidf_matrix.shape[1]==test_tfidf_matrix.shape[1]
train_tfidf_agmax = np.argmax(train_tfidf_matrix,axis=1)
train_tfidf_max = np.max(train_tfidf_matrix,axis=1)
train_tfidf_mean = np.mean(train_tfidf_matrix,axis=1)
train_tfidf_std = np.std(train_tfidf_matrix,axis=1)
test_tfidf_agmax = np.argmax(test_tfidf_matrix,axis=1)
test_tfidf_max = np.max(test_tfidf_matrix,axis=1)
test_tfidf_mean = np.mean(test_tfidf_matrix,axis=1)
test_tfidf_std = np.std(test_tfidf_matrix,axis=1)
print('train_tfidf_agmax.shape:')
print(train_tfidf_agmax.shape)
print('train_tfidf_mean.shape:')
print(train_tfidf_mean.shape)
print('test_tfidf_agmax.shape:')
print(test_tfidf_agmax.shape)
print('test_tfidf_mean.shape:')
print(test_tfidf_mean.shape)
return train_tif,test_tif,train_tfidf_agmax,train_tfidf_max,train_tfidf_mean,train_tfidf_std,test_tfidf_agmax,test_tfidf_max,test_tfidf_mean,test_tfidf_std
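# Sketch of what get_tfidf computes: each user's item ids become one
# space-joined "document", then per-row TF-IDF aggregates (argmax/max/mean/std)
# summarise how concentrated that user's behaviour is. Toy data only:
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
demo_docs = ['ad1 ad2 ad2', 'ad2 ad3']        # two users' item sequences
demo_mat = TfidfVectorizer().fit_transform(demo_docs).toarray()
print(demo_mat.shape)                         # (n_users, vocab size)
print(np.argmax(demo_mat, axis=1), demo_mat.max(axis=1))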
# Columns to drop
drop_cols = ['pt_d','label','communication_onlinerate','index','id','K']
# Select the categorical features
cat_cols = [ 'uid', 'task_id', 'adv_id', 'creat_type_cd', 'adv_prim_id',
'dev_id', 'inter_type_cd', 'slot_id', 'spread_app_id', 'tags',
'app_first_class', 'app_second_class', 'age', 'city', 'city_rank',
'device_name', 'device_size', 'career', 'gender', 'net_type',
'residence', 'his_app_size', 'his_on_shelf_time', 'app_score',
'emui_dev', 'list_time', 'device_price', 'up_life_duration',
'up_membership_grade', 'membership_life_duration', 'consume_purchase'
, 'communication_avgonline_30d', 'indu_name',
]
MASK = 'MASK'
miss_col1 = ['task_id', 'adv_id','uid']
miss_col2 = ['adv_prim_id','dev_id' ]#, 'device_size','spread_app_id','indu_name']
for col in tqdm_notebook( miss_col1):
train_df[col] = train_df[col].apply(lambda x :get_mask_train(x,0.1))
for col in tqdm_notebook(miss_col2):
train_df[col] = train_df[col].apply(lambda x :get_mask_train(x,0.05))
for col in tqdm_notebook( miss_col1):
mask_list = list(set(test_df[col].values)-set(train_df[col].values))
print(len(mask_list)/len(set(test_df[col].values)))
test_df[col] = test_df[col].replace(mask_list,-1)
for col in tqdm_notebook(miss_col2):
mask_list = list(set(test_df[col].values)-set(train_df[col].values))
print(len(mask_list)/len(set(test_df[col].values)))
test_df[col] = test_df[col].replace(mask_list,-1)
train_df.reset_index(drop=True,inplace=True)
user_col = ['uid','age','city','city_rank','career','gender','residence','communication_avgonline_30d','consume_purchase','membership_life_duration','up_membership_grade','up_life_duration']
ad_col = ['task_id','adv_id','creat_type_cd','adv_prim_id','dev_id','slot_id','spread_app_id','tags','app_first_class','app_second_class','indu_name','inter_type_cd']
phone_col = ['device_name','device_size','net_type','emui_dev','device_price']
app_col = ['his_app_size','his_on_shelf_time','app_score','list_time']
train_tif_uid1,test_tif_uid1,train_tfidf_agmax,train_tfidf_max,train_tfidf_mean,train_tfidf_std,test_tfidf_agmax,test_tfidf_max,test_tfidf_mean,test_tfidf_std = get_tfidf(train_df , test_df, 'uid','task_id')
train_tif_uid1 = train_tif_uid1.drop('task_id',axis=1)
train_tif_uid1['uid'+'task_id'+'tf_argmax'] = train_tfidf_agmax
train_tif_uid1['uid'+'task_id'+'max'] = train_tfidf_max
train_tif_uid1['uid'+'task_id'+'mean'] = train_tfidf_mean
train_tif_uid1['uid'+'task_id'+'std'] = train_tfidf_std
train_tif_uid2,test_tif,train_tfidf_agmax,train_tfidf_max,train_tfidf_mean,train_tfidf_std,test_tfidf_agmax,test_tfidf_max,test_tfidf_mean,test_tfidf_std = get_tfidf(train_df , test_df, 'uid','adv_id')
train_tif_uid2 = train_tif_uid2.drop('adv_id',axis=1)
train_tif_uid2['uid'+'adv_id'+'tf_argmax'] = train_tfidf_agmax
train_tif_uid2['uid'+'adv_id'+'max'] = train_tfidf_max
train_tif_uid2['uid'+'adv_id'+'mean'] = train_tfidf_mean
train_tif_uid2['uid'+'adv_id'+'std'] = train_tfidf_std
train_tif_uid3,test_tif,train_tfidf_agmax,train_tfidf_max,train_tfidf_mean,train_tfidf_std,test_tfidf_agmax,test_tfidf_max,test_tfidf_mean,test_tfidf_std = get_tfidf(train_df , test_df, 'uid','slot_id')
train_tif_uid3 = train_tif_uid3.drop('slot_id',axis=1)
train_tif_uid3['uid'+'slot_id'+'tf_argmax'] = train_tfidf_agmax
train_tif_uid3['uid'+'slot_id'+'max'] = train_tfidf_max
train_tif_uid3['uid'+'slot_id'+'mean'] = train_tfidf_mean
train_tif_uid3['uid'+'slot_id'+'std'] = train_tfidf_std
train_tif_uid4,test_tif,train_tfidf_agmax,train_tfidf_max,train_tfidf_mean,train_tfidf_std,test_tfidf_agmax,test_tfidf_max,test_tfidf_mean,test_tfidf_std = get_tfidf(train_df , test_df, 'uid','adv_prim_id')
train_tif_uid4 = train_tif_uid4.drop('adv_prim_id',axis=1)
train_tif_uid4['uid'+'adv_prim_id'+'tf_argmax'] = train_tfidf_agmax
train_tif_uid4['uid'+'adv_prim_id'+'max'] = train_tfidf_max
train_tif_uid4['uid'+'adv_prim_id'+'mean'] = train_tfidf_mean
train_tif_uid4['uid'+'adv_prim_id'+'std'] = train_tfidf_std
def add_noise(series, noise_level):
return series * (1 + noise_level * np.random.randn(len(series)))
def target_encode(trn_series=None,
tst_series1=None,
tst_series2 = None,
target=None,
min_samples_leaf=1,
smoothing=1,
noise_level=0):
"""
Smoothing is computed like in the following paper by Daniele Micci-Barreca
https://kaggle2.blob.core.windows.net/forum-message-attachments/225952/7441/high%20cardinality%20categoricals.pdf
trn_series : training categorical feature as a pd.Series
tst_series : test categorical feature as a pd.Series
target : target data as a pd.Series
min_samples_leaf (int) : minimum samples to take category average into account
smoothing (int) : smoothing effect to balance categorical average vs prior
"""
assert len(trn_series) == len(target)
assert trn_series.name == tst_series1.name
assert trn_series.name == tst_series2.name
nui = trn_series.nunique()
cou = len(trn_series)
min_samples_leaf = min_samples_leaf*(cou/nui)
print(min_samples_leaf)
temp = pd.concat([trn_series, target], axis=1)
# Compute target mean
averages = temp.groupby(by=trn_series.name)[target.name].agg(["mean", "count"])
# Compute smoothing
smoothing = 1 / (1 + np.exp(-(averages["count"] - min_samples_leaf) / smoothing))
# Apply average function to all target data
prior = target.mean()
# The bigger the count the less full_avg is taken into account
averages[target.name] = prior * (1 - smoothing) + averages["mean"] * smoothing
averages.drop(["mean", "count"], axis=1, inplace=True)
# Apply averages to trn and tst series
ft_trn_series = pd.merge(
trn_series.to_frame(trn_series.name),
averages.reset_index().rename(columns={'index': target.name, target.name: 'average'}),
on=trn_series.name,
how='left')['average'].rename(trn_series.name + '_mean').fillna(prior)
# pd.merge does not keep the index so restore it
ft_trn_series.index = trn_series.index
ft_tst_series1 = pd.merge(
tst_series1.to_frame(tst_series1.name),
averages.reset_index().rename(columns={'index': target.name, target.name: 'average'}),
on=tst_series1.name,
how='left')['average'].rename(trn_series.name + '_mean').fillna(prior)
ft_tst_series2 = pd.merge(
tst_series2.to_frame(tst_series2.name),
averages.reset_index().rename(columns={'index': target.name, target.name: 'average'}),
on=tst_series2.name,
how='left')['average'].rename(trn_series.name + '_mean').fillna(prior)
# pd.merge does not keep the index so restore it
ft_tst_series1.index = tst_series1.index
ft_tst_series2.index = tst_series2.index
return add_noise(ft_trn_series, noise_level).values, add_noise(ft_tst_series1, noise_level).values,add_noise(ft_tst_series2, noise_level).values
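# Worked toy example of the smoothing in target_encode above: each category's
# mean is blended with the global prior, weighted by a sigmoid of its count
# (min_samples_leaf=1, smoothing=1 here; the real code scales the leaf size).
import numpy as np
import pandas as pd
demo = pd.DataFrame({'cat': ['a', 'a', 'b'], 'y': [1.0, 0.0, 1.0]})
agg = demo.groupby('cat')['y'].agg(['mean', 'count'])
w = 1 / (1 + np.exp(-(agg['count'] - 1) / 1))
prior = demo['y'].mean()
enc = prior * (1 - w) + agg['mean'] * w
print(enc)  # rare category 'b' sits halfway between its mean and the prior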
folder = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
test_df2 = test_df.copy()
test_df3 = test_df.copy()
test_df4 = test_df.copy()
test_df5 = test_df.copy()
test_df_list = [test_df , test_df2, test_df3, test_df4, test_df5]
train_df.reset_index(drop=True,inplace=True)
for col in tqdm_notebook(cat_cols):
i = 1
train_df[col + 'tar_enco'] = 0
train_df['K'] = 0
for k ,(tr_idx, oof_idx) in enumerate(StratifiedKFold(n_splits=5, random_state=2020, shuffle=True).split(train_df, train_df['label'])):
print('fold{}'.format(i))
i+=1
trn_series = train_df.iloc[tr_idx][col]
tst_series1 = train_df.iloc[oof_idx][col]
tst_series2 = test_df_list[k][col]
target = train_df.iloc[tr_idx].label
train_targetencoding,oof_targetencoding,test_targetencoding = target_encode(trn_series,
tst_series1,
tst_series2,
target,
min_samples_leaf=0.2,
smoothing=1,
noise_level=0.0001)
train_df.loc[oof_idx,col + 'tar_enco'] = oof_targetencoding
train_df.loc[oof_idx,'K'] = k
test_df_list[k][col + 'tar_enco'] = test_targetencoding
train_df = reduce_mem(train_df)
gc.collect()
train_df = reduce_mem(train_df)
test_df = test_df_list[0]
train_df
train_df = lower_sample_data_by_sample(train_df , 3,303).reset_index(drop=True)
train_df = train_df.merge(train_tif_uid1,on='uid',how='left')
train_df = train_df.merge(train_tif_uid2,on='uid',how='left')
train_df = train_df.merge(train_tif_uid3,on='uid',how='left')
train_df = train_df.merge(train_tif_uid4,on='uid',how='left')
# train_df = train_df.merge(train_tif_taskid1,on='uid',how='left')
# train_df = train_df.merge(train_tif_taskid2,on='uid',how='left')
# train_df = train_df.merge(train_tif_advid1,on='uid',how='left')
# train_df = train_df.merge(train_tif_advid1,on='uid',how='left')
test_df = test_df.merge(train_tif_uid1,on='uid',how='left')
test_df = test_df.merge(train_tif_uid2,on='uid',how='left')
test_df = test_df.merge(train_tif_uid3,on='uid',how='left')
test_df = test_df.merge(train_tif_uid4,on='uid',how='left')
# test_df = test_df.merge(train_tif_taskid1,on='uid',how='left')
# test_df = test_df.merge(train_tif_taskid2,on='uid',how='left')
# test_df = test_df.merge(train_tif_advid1,on='uid',how='left')
# test_df = test_df.merge(train_tif_advid1,on='uid',how='left')
for i in range(5):
    test_df_list[i] = test_df_list[i].merge(train_tif_uid1, on='uid', how='left')
    test_df_list[i] = test_df_list[i].merge(train_tif_uid2, on='uid', how='left')
    test_df_list[i] = test_df_list[i].merge(train_tif_uid3, on='uid', how='left')
    test_df_list[i] = test_df_list[i].merge(train_tif_uid4, on='uid', how='left')
    # test_df_list[i] = test_df_list[i].merge(train_tif_taskid1, on='uid', how='left')
    # test_df_list[i] = test_df_list[i].merge(train_tif_taskid2, on='uid', how='left')
    # test_df_list[i] = test_df_list[i].merge(train_tif_advid1, on='uid', how='left')
    # test_df_list[i] = test_df_list[i].merge(train_tif_advid2, on='uid', how='left')
test_df
for col in tqdm_notebook(cat_cols):
    cl = CountEncoder(cols=col)
    cl.fit(train_df[col])
    train_df[col + '_count'] = cl.transform(train_df[col]).values
    for i in range(5):
        test_df_list[i] = test_df_list[i].join(cl.transform(test_df[col]).add_suffix('_count'))
cat_cols = cat_cols+['user_kme','ad_kme','uidtask_idtf_argmax']
dense_feature = [col for col in train_df.columns if col not in drop_cols+cat_cols]
train_df = reduce_mem(train_df)
for i in range(5):
    test_df_list[i].drop(['communication_onlinerate'], axis=1, inplace=True)
    test_df_list[i].fillna(0, inplace=True)
    test_df_list[i]['K'] = i
feature = cat_cols+dense_feature
estimator_ad= KMeans(n_clusters=500, random_state=42)
estimator_user= KMeans(n_clusters=500, random_state=42)
user_col = ['age','city','city_rank','career','gender','residence']
ad_col = ['task_id','adv_id','creat_type_cd','adv_prim_id','dev_id','slot_id','spread_app_id','tags','app_first_class','app_second_class','indu_name','inter_type_cd']
# Load the pre-fitted KMeans models from disk
import pickle
with open('estimator.pickle', 'rb') as f:
    estimator_user = pickle.load(f)
with open('estimator_ad.pickle', 'rb') as f:
    estimator_ad = pickle.load(f)
ad_features = []
for col in train_df.columns:
    for c in ad_col:
        if c + 'tar_enco' in col:
            ad_features.append(col)
user_features = []
for col in train_df.columns:
    for c in user_col:
        if c + 'tar_enco' in col:
            user_features.append(col)
ad_pred = estimator_ad.predict(train_df[ad_features])
train_df['ad_kme'] = ad_pred
for i, t in enumerate(test_df_list):
    test_df_list[i]['ad_kme'] = estimator_ad.predict(t[ad_features])
user_pred = estimator_user.predict(train_df[user_features])
train_df['user_kme'] = user_pred
for i, t in enumerate(test_df_list):
    test_df_list[i]['user_kme'] = estimator_user.predict(t[user_features])
import pickle
with open('estimator_ad.pickle', 'wb') as f:
    pickle.dump(estimator_ad, f)
test_df['user_kme'] = estimator_user.predict(test_df[user_features])
test_df['ad_kme'] = estimator_ad.predict(test_df[ad_features])
test_df
for col in tqdm_notebook(['user_kme', 'ad_kme', 'uidtask_idtf_argmax', 'uidadv_idtf_argmax', 'uidslot_idtf_argmax', 'uidadv_prim_idtf_argmax']):
    cl = CountEncoder(cols=col)
    cl.fit(train_df[col])
    train_df[col + '_count'] = cl.transform(train_df[col]).values
    for i in range(5):
        test_df_list[i] = test_df_list[i].join(cl.transform(test_df[col]).add_suffix('_count'))
for col in tqdm_notebook(['user_kme', 'ad_kme', 'uidtask_idtf_argmax', 'uidadv_idtf_argmax', 'uidslot_idtf_argmax', 'uidadv_prim_idtf_argmax']):
    train_df[col + 'tar_enco'] = 0
    train_df['K'] = 0
    for k, (tr_idx, oof_idx) in enumerate(StratifiedKFold(n_splits=5, random_state=2020, shuffle=True).split(train_df, train_df['label'])):
        print('fold{}'.format(k + 1))
        trn_series = train_df.iloc[tr_idx][col]
        tst_series1 = train_df.iloc[oof_idx][col]
        tst_series2 = test_df_list[k][col]
        target = train_df.iloc[tr_idx].label
        train_targetencoding, oof_targetencoding, test_targetencoding = target_encode(
            trn_series, tst_series1, tst_series2, target,
            min_samples_leaf=0.2, smoothing=1, noise_level=0.0001)
        train_df.loc[oof_idx, col + 'tar_enco'] = oof_targetencoding
        train_df.loc[oof_idx, 'K'] = k
        test_df_list[k][col + 'tar_enco'] = test_targetencoding
    train_df = reduce_mem(train_df)
    gc.collect()
seed=1080
is_shuffle=True
user_col = ['uid','age','city','city_rank','career','gender','residence','communication_avgonline_30d','consume_purchase','membership_life_duration','up_membership_grade','up_life_duration']
ad_col = ['task_id','adv_id','creat_type_cd','adv_prim_id','dev_id','slot_id','spread_app_id','tags','app_first_class','app_second_class','indu_name','inter_type_cd']
phone_col = ['device_name','device_size','net_type','emui_dev','device_price']
app_col = ['his_app_size','his_on_shelf_time','app_score','list_time']
train_df
#-------------------------------------------------- Model training ----------------------------------------#
from sklearn.ensemble import RandomForestClassifier
# Out-of-fold random-forest meta-features, one per column group
for name, cols in [('rf_user', user_col), ('rf_ad', ad_col),
                   ('rf_phone', phone_col), ('rf_app', app_col)]:
    for k in tqdm_notebook(range(5)):
        t = train_df[train_df.K != k].reset_index(drop=True)[cols]
        t_label = train_df[train_df.K != k].reset_index(drop=True).label.values
        v = train_df[train_df.K == k].reset_index(drop=True)[cols]
        rf = RandomForestClassifier(n_estimators=10, criterion='gini',
                                    n_jobs=-1, random_state=42, verbose=1)
        rf.fit(t, t_label)
        train_df.loc[train_df[train_df.K == k].index, name] = rf.predict_proba(v)[:, 1]
        test_df_list[k][name] = rf.predict_proba(test_df_list[k][cols])[:, 1]
print(sklearn.metrics.roc_auc_score(train_df.label,train_df.rf_user))
print(sklearn.metrics.roc_auc_score(train_df.label,train_df.rf_ad))
print(sklearn.metrics.roc_auc_score(train_df.label,train_df.rf_phone))
print(sklearn.metrics.roc_auc_score(train_df.label,train_df.rf_app))
cat_cols
dense_feature = [col for col in train_df.columns if col not in drop_cols+cat_cols]
feature = cat_cols+dense_feature
feature_importance_df = pd.DataFrame()
predicts = np.zeros(len(train_df))
pred = np.zeros(len(test_df_list[0]))
true = np.zeros(len(train_df))
begin = 0
for k in range(5):
    # train/valid split by the fold assignment stored in column K
    t_label = train_df[train_df.K != k].reset_index(drop=True).label
    te_label = train_df[train_df.K == k].reset_index(drop=True).label
    clf = cbt.CatBoostClassifier(iterations=150, learning_rate=0.3, depth=7, one_hot_max_size=5,
                                 use_best_model=True, loss_function='Logloss', eval_metric="AUC",
                                 logging_level='Verbose', task_type='GPU',
                                 cat_features=cat_cols)  # counter_calc_method='Full', l2_leaf_reg=10
    clf.fit(train_df[train_df.K != k].reset_index(drop=True)[feature], t_label.astype('int32'),
            eval_set=(train_df[train_df.K == k].reset_index(drop=True)[feature], te_label.astype('int32')),
            plot=True, verbose=1, cat_features=cat_cols)
    # predicts[begin:over] = clf.predict_proba(...)[:, 1]; true[begin:over] = te_label.values
    pred += clf.predict_proba(test_df_list[k][feature])[:, 1]
    gc.collect()
    print('--------------------')
# print(sklearn.metrics.roc_auc_score(true, predicts))
feature_importance_df = pd.DataFrame()
feature_importance_df["importance"] = clf.feature_importances_
feature_importance_df["feature"] = feature
feature_importance_df.sort_values('importance',ascending=False)
#------------------------------ Model prediction ----------------------------------------#
pred = pred / 5
# 0.8320243359
(pred>0.5).sum()
pred
((pred-np.min(pred))/(np.max(pred)-np.min(pred)))
res = pd.DataFrame()
res['id'] = test_df_list[0]['id'].astype('int32')
res['probability'] = pred
res.to_csv('5cv_catboost_baseline_target_encoding_.csv',index = False)
res
```
---
# Codebuster STAT 535 Statistical Computing Project
## Movie recommendation pipeline
##### Patrick's comments 11/9
- Goal: Build a small real-world deployment pipeline, like one that could be used at Netflix / Amazon
- build / test with movie recommendation data set (model fitting, data preprocessing, evaluation)
- Show that it also works with another dataset like product recommendation
- Find data on UCI repo, kaggle, google search
- Use scikit learn estimation: https://github.com/scikit-learn-contrib/project-template
## Literature
- https://users.ece.cmu.edu/~dbatra/publications/assets/goel_batra_netflix.pdf
- http://delivery.acm.org/10.1145/1460000/1454012/p11-park.pdf?ip=72.19.68.210&id=1454012&acc=ACTIVE%20SERVICE&key=73B3886B1AEFC4BB%2EB478147E31829731%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1543416754_7f92e0642e26e7ea732886879096c704
- https://www.kaggle.com/prajitdatta/movielens-100k-dataset/kernels
- https://medium.com/@james_aka_yale/the-4-recommendation-engines-that-can-predict-your-movie-tastes-bbec857b8223
- https://www.kaggle.com/c/predict-movie-ratings
- https://cseweb.ucsd.edu/classes/wi17/cse258-a/reports/a048.pdf
- https://github.com/neilsummers/predict_movie_ratings/blob/master/movieratings.py
- https://medium.com/@connectwithghosh/recommender-system-on-the-movielens-using-an-autoencoder-using-tensorflow-in-python-f13d3e8d600d
### A few more
- https://sci2s.ugr.es/keel/pdf/specific/congreso/xia_dong_06.pdf (Uses SVM for classification, then MF for recommendation)
- https://www.kaggle.com/rounakbanik/movie-recommender-systems (Employs at least three modules for recommendation)
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.703.4954&rep=rep1&type=pdf (Close to what we need, but a little too involved)
- https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0165868 (Uses SVM and correlation matrices... I have already tried the correlation approach; it looks quite good, but how do we quantify accuracy?)
- https://www.quora.com/How-do-we-use-SVMs-in-a-collaborative-recommendation (A good thread on SVM)
- http://www.quuxlabs.com/blog/2010/09/matrix-factorization-a-simple-tutorial-and-implementation-in-python/ (A good tutorial on matrix factorization)
## Approach
##### User profile cases:
- ##### Case 0 rated movies: Supervised prediction with just user age, gender, and year of the movie
In the cold-start case, no user information is available.
- ##### Case < 10 rated movies: Content-based recommender system
Content-based recommendation needs little information about users and their taste. As we can see in the preprocessing, most users rated only one to five movies, meaning we have incomplete user profiles. Content-based recommendation makes sense here because we can recommend similar movies; we cannot recommend other categories a user might like, since similar users cannot be identified from an incomplete profile.
- ##### Case >= 10 rated movies: Collaborative recommender system
Collaborative filtering makes sense if you have a good user profile, which we assume we have if a user rated at least 10 movies. With a good user profile we can identify similar users and make more sophisticated recommendations, e.g. movies from other genres.
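The three cases above can be sketched as a simple dispatcher. This is an illustrative sketch only; the function name and the default threshold are assumptions, not part of the pipeline code below.

```
def choose_recommender(n_ratings, min_ratings=10):
    """Route a user to a recommendation strategy by profile completeness."""
    if n_ratings == 0:
        return "supervised"      # cold start: demographic / movie-year model
    elif n_ratings < min_ratings:
        return "content_based"   # incomplete profile: recommend similar movies
    else:
        return "collaborative"   # full profile: recommend via similar users

print(choose_recommender(0))    # supervised
print(choose_recommender(5))    # content_based
print(choose_recommender(25))   # collaborative
```

The threshold is a tuning knob; the notebook below settles on 10 ratings.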
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from scipy import interp
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
from sklearn.model_selection import cross_val_predict, cross_val_score, cross_validate, StratifiedKFold
from sklearn.metrics import classification_report,confusion_matrix, roc_curve, auc
```
## Data Understanding and Preprocessing
Get a feeling for the dataset, its problems and conduct helpful preprocessing
```
df = pd.read_csv("allData.tsv", sep='\t')
print(f"Shape: {df.shape}")
print(df.dtypes)
df.head()
df.age.value_counts()
```
##### Histogram: Number of movies are rated by users
Most users rated only up to 5 movies.
```
df.userID.value_counts().hist(bins=50)
```
##### Divide datasets for different recommendations (random forest, content based, collaborative based)
It can be useful to use content-based recommender systems for users with only a few ratings.
```
df_split = df.copy()
df_split.set_index('userID', inplace=True)
# set for content based recommendation with #ratings < 10
df_content = df_split[df.userID.value_counts()<10]
# set for collaborative recommendation with #ratings >= 10
df_collaborative = df_split[df.userID.value_counts()>=10]
df_content.index.value_counts().hist(bins=50)
df_collaborative.index.value_counts().hist(bins=50)
```
##### Transform numerical rating to binary
- 1, if user rates movie 4 or 5
- 0, if user rates movie less than 4
```
df['rating'].mask(df['rating'] < 4, 0, inplace=True)
df['rating'].mask(df['rating'] > 3, 1, inplace=True)
```
##### Check rating distribution
```
df['rating'].hist()
```
## Recommendation
##### Cold start: Gradient Boosting Classifier
Logic: treat every user the same, predicting ratings over the whole movie dataset.
- ##### Case 0 rated movies: Supervised prediction with just user age, gender, and year of the movie
In case of cold-start: No user information available
```
# Cross-validation to test for and anticipate the overfitting problem
def crossvalidate(clf, X, y):
    '''
    Calculate precision, recall, and roc_auc for a 10-fold cross validation run with the passed classifier
    '''
    scores1 = cross_val_score(clf, X, y, cv=10, scoring='precision')
    scores2 = cross_val_score(clf, X, y, cv=10, scoring='recall')
    scores3 = cross_val_score(clf, X, y, cv=10, scoring='roc_auc')
    # The mean score and standard deviation of the score estimate
    print("Cross Validation Precision: %0.2f (+/- %0.2f)" % (scores1.mean(), scores1.std()))
    print("Cross Validation Recall: %0.2f (+/- %0.2f)" % (scores2.mean(), scores2.std()))
    print("Cross Validation roc_auc: %0.2f (+/- %0.2f)" % (scores3.mean(), scores3.std()))
# Run classifier with cross-validation and plot ROC curves
# from http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html
def get_crossval_roc(clfname, classifier, X, y):
    '''
    Run classifier with cross-validation and plot ROC curves
    '''
    n_samples, n_features = X.shape
    cv = StratifiedKFold(n_splits=6)
    tprs = []
    aucs = []
    mean_fpr = np.linspace(0, 1, 100)
    i = 0
    for train, test in cv.split(X, y):
        probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test])
        # Compute ROC curve and area under the curve
        fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])
        tprs.append(interp(mean_fpr, fpr, tpr))
        tprs[-1][0] = 0.0
        roc_auc = auc(fpr, tpr)
        aucs.append(roc_auc)
        plt.plot(fpr, tpr, lw=1, alpha=0.3,
                 label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
        i += 1
    plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
             label='Chance', alpha=.8)
    mean_tpr = np.mean(tprs, axis=0)
    mean_tpr[-1] = 1.0
    mean_auc = auc(mean_fpr, mean_tpr)
    std_auc = np.std(aucs)
    plt.plot(mean_fpr, mean_tpr, color='b',
             label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
             lw=2, alpha=.8)
    std_tpr = np.std(tprs, axis=0)
    tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
    tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
    plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
                     label=r'$\pm$ 1 std. dev.')
    plt.xlim([-0.05, 1.05])
    plt.ylim([-0.05, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic example')
    plt.legend(loc="lower right")
    plt.show()
    return
```
##### Preprocessing for the gradient boosting classifier
```
# User information before any movie ratings
X = df[['age', 'gender', 'year', 'genre1', 'genre2', 'genre3']]
y = df['rating'].to_numpy()
# Preprocessing: one-hot encode the categorical columns
dummyvars = pd.get_dummies(X[['gender', 'genre1', 'genre2', 'genre3']])
# append the dummy variables to the numeric columns
X = pd.concat([X[['age', 'year']], dummyvars], axis=1).to_numpy()
print("GradientBoostingClassifier")
gbclf = GradientBoostingClassifier(n_estimators=100)
gbclf.fit(X=X, y=y)
gbclf.predict(X)
#crossvalidate(gbclf, X, y)
#get_crossval_roc("gbclf", gbclf, X, y)
```
##### Content-based recommendation with tf-idf for users with <10 ratings
- ##### Case < 10 rated movies: Content-based recommender system
Content-based recommendation needs little information about users and their taste. As we can see in the preprocessing, most users rated only one to five movies, meaning we have incomplete user profiles. Content-based recommendation makes sense here because we can recommend similar movies; we cannot recommend other categories a user might like, since similar users cannot be identified from an incomplete profile.
- Code inspired by: https://medium.com/@james_aka_yale/the-4-recommendation-engines-that-can-predict-your-movie-tastes-bbec857b8223
- Make recommendations based on similarity of movie genres, purely content based.
```
# import movies
movies = pd.read_csv("movies.tsv", sep='\t')
print(f"Shape: {movies.shape}")
movies.head()
# Preprocessing
# Strip space at the end of string
movies['name'] = movies['name'].str.rstrip()
# Concat genres into one string
movies['genres_concat'] = movies[['genre1', 'genre2', 'genre3']].astype(str).apply(' '.join, axis=1)
# Remove nans in string and strip spaces at the end
movies['genres_concat'] = movies['genres_concat'].str.replace('nan','').str.rstrip()
movies.head()
# Get movie recommendations based on the cosine similarity score of movie genres
def content_based_recommendation(movies, name, number_recommendations):
    '''
    Recommends similar movies based on a movie title and its similarity to movies in the movie database
    @param movies: pandas dataframe with movie dataset with columns (movieID, name, genres_concat)
    @param name: movie title as string
    @param number_recommendations: number of recommendations returned as integer
    '''
    # Create tf-idf matrix with sklearn's TfidfVectorizer
    tf = TfidfVectorizer(analyzer='word', ngram_range=(1, 2), min_df=0, stop_words='english')
    tfidf_matrix = tf.fit_transform(movies['genres_concat'])
    # Calculate similarity matrix with cosine distance of tf-idf values
    cosine_sim = linear_kernel(tfidf_matrix, tfidf_matrix)
    # Build a 1-dimensional array with movie titles
    indices = pd.Series(movies.index, index=movies['name'])
    # Rank movies according to similarity to the requested movie
    idx = indices[name]
    sim_scores = list(enumerate(cosine_sim[idx]))
    sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
    sim_scores = sim_scores[1:(number_recommendations + 1)]
    movie_indices = [i[0] for i in sim_scores]
    return movies.name.iloc[movie_indices]
```
##### Test recommendations
```
content_based_recommendation(movies, 'Father of the Bride Part II', 5)
```
## Evaluation
## Create predictions for predict.csv
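This step is left open above; a minimal sketch of producing and saving predictions might look like the following, assuming a fitted model (e.g. `gbclf`) and a new-data frame with the same feature columns as training. The function name, file names, and the caveat about aligning dummy columns are assumptions, not part of the notebook.

```
import pandas as pd

def make_predictions(model, feature_df, out_path="predictions.csv"):
    """Predict binary ratings for new (user, movie) rows and save to CSV."""
    # One-hot encode the categorical columns the same way as in training;
    # in a real pipeline the dummy columns must be aligned with the
    # training-time columns (e.g. via reindex).
    X_new = pd.concat(
        [feature_df[['age', 'year']],
         pd.get_dummies(feature_df[['gender', 'genre1', 'genre2', 'genre3']])],
        axis=1)
    out = feature_df.assign(predicted_rating=model.predict(X_new.to_numpy()))
    out.to_csv(out_path, index=False)
    return out
```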
---
```
from glob import glob
from os import path
import re
from skbio import DistanceMatrix
import pandas as pd
import numpy as np
from kwipexpt import *
%matplotlib inline
%load_ext rpy2.ipython
%%R
library(tidyr)
library(dplyr, warn.conflicts=F, quietly=T)
library(ggplot2)
```
Calculate performance of kWIP
=============================
The next bit of python code calculates the performance of kWIP against the distance between samples calculated from the alignments of their genomes.
This code calculates Spearman's $\rho$ between the off-diagonal elements of the triangular distance matrices.
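The helper `spearmans_rho_distmats` used below comes from `kwipexpt`; as a rough stand-in, the comparison it performs can be sketched with plain numpy/scipy. This is an illustrative sketch, not the actual implementation.

```
import numpy as np
from scipy.stats import spearmanr

def spearman_offdiag(dist_a, dist_b):
    """Spearman's rho between the strict upper-triangle entries of two
    square distance matrices over the same set of samples."""
    a, b = np.asarray(dist_a, float), np.asarray(dist_b, float)
    iu = np.triu_indices_from(a, k=1)  # off-diagonal, each pair counted once
    rho, _pvalue = spearmanr(a[iu], b[iu])
    return rho
```

Only the upper triangle is compared because a distance matrix is symmetric with a zero diagonal, so using the full matrix would double-count every pair.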
```
expts = list(map(lambda fp: path.basename(fp.rstrip('/')), glob('data/*/')))
print("Expts:", *expts[:10], "...")
def process_expt(expt):
    expt_results = []

    def extract_info(filename):
        return re.search(r'kwip/(\d\.?\d*)x-(0\.\d+)-(wip|ip).dist', filename).groups()

    # dict of scale: distance matrix, populated as we go
    truths = {}
    for distfile in glob("data/{}/kwip/*.dist".format(expt)):
        cov, scale, metric = extract_info(distfile)
        if scale not in truths:
            genome_dist_path = 'data/{ex}/all_genomes-{sc}.dist'.format(ex=expt, sc=scale)
            truths[scale] = load_sample_matrix_to_runs(genome_dist_path)
        exptmat = DistanceMatrix.read(distfile)
        rho = spearmans_rho_distmats(exptmat, truths[scale])
        expt_results.append({
            "coverage": cov,
            "scale": scale,
            "metric": metric,
            "rho": rho,
            "seed": expt,
        })
    return expt_results
#process_expt('3662')
results = []
for res in map(process_expt, expts):
results.extend(res)
results = pd.DataFrame(results)
```
Statistical analysis
====================
This is done in R, as that's easier.
Below we see a summary and structure of the data
```
%%R -i results
results$coverage = as.numeric(as.character(results$coverage))
results$scale = as.numeric(as.character(results$scale))
print(summary(results))
str(results)
```
### Experiment design
Below we see the design of the experiment in terms of the two major variables.
We have a series (vertically) that, at 30x coverage, looks at the effect of genetic variation on performance. There is a second series that examines the effect of coverage at an average pairwise genetic distance of 0.001.
There are 100 replicates for each data point, performed as a separate bootstrap across the random creation of the tree and sampling of reads etc.
```
%%R
ggplot(results, aes(x=coverage, y=scale)) +
geom_point() +
scale_x_log10() +
scale_y_log10() +
theme_bw()
```
Effect of Coverage
------------------
Here we show the spread of data across the 100 reps as boxplots per metric and coverage level.
I note that the weighted product seems slightly more variable, particularly at higher coverage, though the median is nearly always higher.
```
%%R
dat = results %>%
filter(scale==0.001, coverage<=30) %>%
select(rho, metric, coverage)
dat$coverage = as.factor(dat$coverage)
ggplot(dat, aes(x=coverage, y=rho, fill=metric)) +
geom_boxplot(aes(fill=metric))
%%R
# AND AGAIN WITHOUT SUBSETTING
dat = results %>%
filter(scale==0.001) %>%
select(rho, metric, coverage)
dat$coverage = as.factor(dat$coverage)
ggplot(dat, aes(x=coverage, y=rho, fill=metric)) +
geom_boxplot(aes(fill=metric))
%%R
dat = subset(results, scale==0.001 & coverage <=15, select=-scale)
ggplot(dat, aes(x=coverage, y=rho, colour=seed, linetype=metric)) +
geom_line()
%%R
summ = results %>%
filter(scale==0.001, coverage <= 50) %>%
select(-scale) %>%
group_by(coverage, metric) %>%
summarise(rho_av=mean(rho), rho_err=sd(rho))
ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Spearman's ", rho, " +- SD"))) +
scale_x_log10()+
ggtitle("Performance of WIP & IP") +
theme_bw()
%%R
sem <- function(x) sqrt(var(x,na.rm=TRUE)/length(na.omit(x)))
summ = results %>%
filter(scale==0.001) %>%
select(-scale) %>%
group_by(coverage, metric) %>%
summarise(rho_av=mean(rho), rho_err=sem(rho))
ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Spearman's ", rho))) +
scale_x_log10()+
theme_bw()
%%R
cov_diff = results %>%
filter(scale==0.001) %>%
select(rho, metric, coverage, seed) %>%
spread(metric, rho) %>%
mutate(diff=wip-ip) %>%
select(coverage, seed, diff)
print(summary(cov_diff))
p = ggplot(cov_diff, aes(x=coverage, y=diff, colour=seed)) +
geom_line() +
scale_x_log10() +
ggtitle("Per expt difference in performance (wip - ip)")
print(p)
summ = cov_diff %>%
group_by(coverage) %>%
summarise(diff_av=mean(diff), diff_sd=sd(diff))
ggplot(summ, aes(x=coverage, y=diff_av, ymin=diff_av-diff_sd, ymax=diff_av+diff_sd)) +
geom_line() +
geom_ribbon(alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Improvment in Spearman's ", rho, " (wip - IP)"))) +
scale_x_log10() +
theme_bw()
%%R
var = results %>%
filter(coverage == 30) %>%
select(-coverage)
var$scale = as.factor(var$scale)
ggplot(var, aes(x=scale, y=rho, fill=metric)) +
geom_boxplot() +
xlab('Mean pairwise variation') +
ylab(expression(paste("Spearman's ", rho))) +
#scale_x_log10()+
theme_bw()
%%R
summ = results %>%
filter(coverage == 30) %>%
select(-coverage) %>%
group_by(scale, metric) %>%
summarise(rho_av=mean(rho), rho_sd=sd(rho))
ggplot(summ, aes(x=scale, y=rho_av, ymin=rho_av-rho_sd, ymax=rho_av+rho_sd, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab('Mean pairwise variation') +
ylab(expression(paste("Spearman's ", rho))) +
scale_x_log10()+
theme_bw()
```
---
# Example from Image Processing
```
%matplotlib inline
import matplotlib.pyplot as plt
```
Here we'll take a look at a simple facial recognition example.
This uses a dataset available within scikit-learn consisting of a
subset of the [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/)
data. Note that this is a relatively large download (~200MB) so it may
take a while to execute.
```
from sklearn import datasets
lfw_people = datasets.fetch_lfw_people(min_faces_per_person=70, resize=0.4,
data_home='datasets')
lfw_people.data.shape
```
If you're on a unix-based system such as linux or Mac OSX, these shell commands
can be used to see the downloaded dataset:
```
!ls datasets
!du -sh datasets/lfw_home
```
Once again, let's visualize these faces to see what we're working with:
```
fig = plt.figure(figsize=(8, 6))
# plot several images
for i in range(15):
    ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
    ax.imshow(lfw_people.images[i], cmap=plt.cm.bone)
import numpy as np
plt.figure(figsize=(10, 2))
unique_targets = np.unique(lfw_people.target)
counts = [(lfw_people.target == i).sum() for i in unique_targets]
plt.xticks(unique_targets, lfw_people.target_names[unique_targets])
locs, labels = plt.xticks()
plt.setp(labels, rotation=45, size=14)
_ = plt.bar(unique_targets, counts)
```
One thing to note is that these faces have already been localized and scaled
to a common size. This is an important preprocessing piece for facial
recognition, and is a process that can require a large collection of training
data. This can be done in scikit-learn, but the challenge is gathering a
sufficient amount of training data for the algorithm to work.
Fortunately, this piece is common enough that it has been done. One good
resource is [OpenCV](http://opencv.willowgarage.com/wiki/FaceRecognition), the
*Open Computer Vision Library*.
We'll perform a Support Vector classification of the images. We'll
do a typical train-test split on the images to make this happen:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
lfw_people.data, lfw_people.target, random_state=0)
print(X_train.shape, X_test.shape)
```
## Preprocessing: Principal Component Analysis
1850 dimensions is a lot for an SVM. We can use PCA to reduce these 1850 features to a manageable
size, while maintaining most of the information in the dataset. Here it is useful to use a randomized
variant of PCA (``RandomizedPCA`` in older scikit-learn, ``PCA(svd_solver='randomized')`` in current
versions), which approximates PCA and can be much faster for large datasets. We saw this method in
the previous notebook, and will use it again here:
```
from sklearn import decomposition
# PCA with the randomized solver (RandomizedPCA in older scikit-learn)
pca = decomposition.PCA(n_components=150, whiten=True,
                        svd_solver='randomized', random_state=1999)
pca.fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca.shape)
print(X_test_pca.shape)
```
These projected components correspond to factors in a linear combination of
component images such that the combination approaches the original face. In general, PCA can be a powerful technique for preprocessing that can greatly improve classification performance.
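This linear-combination view can be demonstrated on synthetic data (a toy stand-in for the faces matrix; the shapes and names here are illustrative, not from LFW): `inverse_transform` rebuilds a sample as the mean plus a linear combination of the fitted components, and that reconstruction sits closer to the sample than the plain dataset mean does.

```
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for the faces matrix: 100 samples, 50 "pixels"
rng = np.random.RandomState(0)
X = rng.rand(100, 50)

pca = PCA(n_components=10, whiten=True, random_state=0).fit(X)
coeffs = pca.transform(X[:1])            # coordinates in component space
approx = pca.inverse_transform(coeffs)   # mean + linear combination of components

# Reconstruction error vs. the error of just predicting the dataset mean
err_pca = float(np.mean((X[:1] - approx) ** 2))
err_mean = float(np.mean((X[:1] - X.mean(axis=0)) ** 2))
print(err_pca, err_mean)
```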
## Doing the Learning: Support Vector Machines
Now we'll perform support-vector-machine classification on this reduced dataset:
```
from sklearn import svm
clf = svm.SVC(C=5., gamma=0.001)
clf.fit(X_train_pca, y_train)
```
Finally, we can evaluate how well this classification did. First, we might plot a
few of the test-cases with the labels learned from the training set:
```
fig = plt.figure(figsize=(8, 6))
for i in range(15):
    ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
    ax.imshow(X_test[i].reshape((50, 37)), cmap=plt.cm.bone)
    y_pred = clf.predict(X_test_pca[i:i + 1])[0]  # predict expects a 2-D array
    color = 'black' if y_pred == y_test[i] else 'red'
    ax.set_title(lfw_people.target_names[y_pred], fontsize='small', color=color)
```
The classifier is correct on an impressive number of images given the simplicity
of its learning model! Using a support vector classifier on 150 features derived from
the pixel-level data, the algorithm correctly identifies a large number of the
people in the images.
Again, we can
quantify this effectiveness using ``clf.score``
```
print(clf.score(X_test_pca, y_test))
```
## Final Note
Here we have used PCA "eigenfaces" as a pre-processing step for facial recognition.
The reason we chose this is because PCA is a broadly-applicable technique, which can
be useful for a wide array of data types. For more details on the eigenfaces approach, see the original paper by [Turk and Penland, Eigenfaces for Recognition](http://www.face-rec.org/algorithms/PCA/jcn.pdf). Research in the field of facial recognition has moved much farther beyond this paper, and has shown specific feature extraction methods can be more effective. However, eigenfaces is a canonical example of machine learning "in the wild", and is a simple method with good results.
---
### 1)Which of the following operators is used to calculate remainder in a division?
### ANS-%
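For instance, `%` returns the remainder left over after integer division:

```
print(7 % 3)   # remainder of 7 divided by 3 -> 1
print(10 % 5)  # divides evenly -> 0
```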
```
## 2)
2/3
## 3)
6<<2
## 4)
6&2
## 5)
6|2
```
### 6)What does the finally keyword denote in python?
#### ANS-The finally block will be executed whether or not the try block raises an error.
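A small illustration of this: the `finally` block runs in both the error and no-error cases.

```
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return None
    finally:
        print("finally runs either way")

divide(6, 2)   # prints the message, returns 3.0
divide(6, 0)   # still prints the message, returns None
```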
### 7)What is the raise keyword used for in python?
### ANS-A) It is used to raise an exception.
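For example, `raise` lets a function signal an error condition to its caller:

```
def check_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

try:
    check_age(-5)
except ValueError as e:
    print(e)   # age cannot be negative
```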
### 8)Which of the following is a common use case of the yield keyword in python?
### ANS- C) In defining a generator.
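A function containing `yield` becomes a generator: it pauses at each `yield` and resumes on the next iteration.

```
def countdown(n):
    while n > 0:
        yield n   # pauses here, resuming on the next iteration
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```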
### 9)Which of the following are the valid variable names?
### ANS- A) _abc and C) abc2
### 10) Which of the following are the keywords in python?
### ANS- yield and raise.
### 11. Write a python program to find the factorial of a number.
```
## Factorial via recursion
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

n = int(input("Input a number to compute the factorial : "))
print(factorial(n))
```
### 12) Write a python program to find whether a number is prime or composite.
```
num = int(input("Enter any number : "))
if num > 1:
    for i in range(2, num):
        if num % i == 0:
            print(num, "is NOT a prime number")
            break
    else:
        # loop finished without finding a divisor
        print(num, "is a PRIME number")
elif num == 0 or num == 1:
    print(num, "is neither a prime NOR a composite number")
else:
    print(num, "is NOT a prime number; it is a COMPOSITE number")
```
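Trial division up to `num - 1` works, but a common refinement (sketched below with a hypothetical `is_prime` helper) only tests divisors up to the square root of the number:

```
import math

def is_prime(num):
    """Trial division, but only up to sqrt(num)."""
    if num < 2:
        return False
    for i in range(2, math.isqrt(num) + 1):
        if num % i == 0:
            return False
    return True

print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

If a number has any divisor larger than its square root, the matching co-divisor is smaller than the square root, so checking up to `sqrt(num)` is enough.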
### 13) Write a python program to check whether a given string is a palindrome or not.
```
## Palindrome check for a number
num = int(input("Enter a number:"))
temp = num
rev = 0
while num > 0:
    dig = num % 10
    rev = rev*10 + dig
    num = num//10
if temp == rev:
    print("The number is a palindrome!")
else:
    print("Not a palindrome!")

## Works for any string, whether digits or letters
string = input("Enter a string:")
if string == string[::-1]:
    print("The string is a palindrome")
else:
    print("Not a palindrome")
```
### 14) Write a Python program to get the third side of a right-angled triangle from two given sides.
```
def pythagoras(opposite_side, adjacent_side, hypotenuse):
    # Whichever argument is the string "x" is the unknown side;
    # solve opposite**2 + adjacent**2 == hypotenuse**2 for it.
    if opposite_side == "x":
        return "Opposite = " + str(((hypotenuse**2) - (adjacent_side**2))**0.5)
    elif adjacent_side == "x":
        return "Adjacent = " + str(((hypotenuse**2) - (opposite_side**2))**0.5)
    elif hypotenuse == "x":
        return "Hypotenuse = " + str(((opposite_side**2) + (adjacent_side**2))**0.5)
    else:
        return "Mark exactly one side as 'x' to solve for it!"

print(pythagoras(6, 8, 'x'))
print(pythagoras(6, 'x', 10))
print(pythagoras('x', 8, 10))
print(pythagoras(6, 8, 10))
```
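The function above solves the Pythagorean relation a**2 + b**2 == c**2 for whichever side is marked 'x'. A possibly clearer sketch (the `third_side` helper is mine, not from the exercise) marks the unknown side with `None` and leans on the `math` module:

```
import math

def third_side(a=None, b=None, c=None):
    """Solve a**2 + b**2 == c**2 for whichever side is passed as None."""
    if c is None:
        return math.hypot(a, b)        # hypotenuse from the two legs
    if a is None:
        return math.sqrt(c**2 - b**2)  # missing leg
    return math.sqrt(c**2 - a**2)      # the other missing leg

print(third_side(a=6, b=8))    # 10.0
print(third_side(b=8, c=10))   # 6.0
```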
### 15) Write a python program to print the frequency of each of the characters present in a given string.
```
def char_frequency(str1):
    freq = {}
    for ch in str1:
        if ch in freq:
            freq[ch] += 1
        else:
            freq[ch] = 1
    return freq

print(char_frequency('lord of the ring and games of thrones'))
```
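The standard library's `collections.Counter` performs the same tally:

```
from collections import Counter

# Counter builds the character -> count mapping in one call
freq = Counter('lord of the ring and games of thrones')
print(freq['o'], freq[' '])  # 4 7
```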
|
github_jupyter
|
#### Putting It All Together
As you might have guessed from the last notebook, using all of the variables allowed you to drastically overfit the training data. That looked great in terms of the R-squared on those points, but it was not great for predicting on the test data.
We will start where we left off in the last notebook. First read in the dataset.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import AllTogether as t
import seaborn as sns
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
df.head()
```
#### Question 1
**1.** To begin, fill in the format function below with the correct variables. Notice each **{ }** holds a space where one of your variables will be added to the string. This will give you something to do while the function does all the steps you did throughout this lesson.
```
a = 'test_score'
b = 'train_score'
c = 'linear model (lm_model)'
d = 'X_train and y_train'
e = 'X_test'
f = 'y_test'
g = 'train and test data sets'
h = 'overfitting'
q1_piat = '''In order to understand how well our {} fit the dataset,
we first needed to split our data into {}.
Then we were able to fit our {} on the {}.
We could then predict using our {} by providing
the linear model the {} for it to make predictions.
These predictions were for {}.
By looking at the {}, it looked like we were doing awesome because
it was 1! However, looking at the {} suggested our model was not
extending well. The purpose of this notebook will be to see how
well we can get our model to extend to new data.
This problem where our data fits the training data well, but does
not perform well on test data is commonly known as
{}.'''.format(c, g, c, d, c, e, f, b, a, h)
print(q1_piat)
# Print the solution order of the letters in the format
t.q1_piat_answer()
```
#### Question 2
**2.** Now, we need to improve the model. Use the dictionary below to provide the true statements about improving **this model**. **Consider each statement on its own.** Though it might be a good idea after other steps, which would you consider a useful **next step**?
```
a = 'yes'
b = 'no'
q2_piat = {'add interactions, quadratics, cubics, and other higher order terms': b,
'fit the model many times with different rows, then average the responses': a,
'subset the features used for fitting the model each time': a,
'this model is hopeless, we should start over': b}
#Check your solution
t.q2_piat_check(q2_piat)
```
##### Question 3
**3.** Before we get too far along, follow the steps in the function below to create the X (explanatory matrix) and y (response vector) to be used in the model. If your solution is correct, you should see a plot similar to the one shown in the Screencast.
```
def clean_data(df):
'''
INPUT
df - pandas dataframe
OUTPUT
X - A matrix holding all of the variables you want to consider when predicting the response
y - the corresponding response vector
This function cleans df using the following steps to produce X and y:
1. Drop all the rows with no salaries
2. Create X as all the columns that are not the Salary column
3. Create y as the Salary column
4. Drop the Salary, Respondent, and the ExpectedSalary columns from X
5. For each numeric variable in X, fill the column with the mean value of the column.
6. Create dummy columns for all the categorical variables in X, drop the original columns
'''
# Drop rows with missing salary values
df = df.dropna(subset=['Salary'], axis=0)
y = df['Salary']
#Drop respondent and expected salary columns
df = df.drop(['Respondent', 'ExpectedSalary', 'Salary'], axis=1)
# Fill numeric columns with the mean
num_vars = df.select_dtypes(include=['float', 'int']).columns
for col in num_vars:
df[col].fillna((df[col].mean()), inplace=True)
# Dummy the categorical variables
cat_vars = df.select_dtypes(include=['object']).copy().columns
for var in cat_vars:
# for each cat add dummy var, drop original column
df = pd.concat([df.drop(var, axis=1), pd.get_dummies(df[var], prefix=var, prefix_sep='_', drop_first=True)], axis=1)
X = df
return X, y
#Use the function to create X and y
X, y = clean_data(df)
#cutoffs here pertain to the minimum number of non-zero values required for a column to be used.
#Therefore, lower values for the cutoff provide more predictors in the model.
cutoffs = [5000, 3500, 2500, 1000, 100, 50, 30, 25]
r2_scores_test, r2_scores_train, lm_model, X_train, X_test, y_train, y_test = t.find_optimal_lm_mod(X, y, cutoffs)
```
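To see what the mean-imputation and dummy-encoding steps of `clean_data` do, here is a toy illustration (hypothetical data, not the survey):

```
import pandas as pd

toy = pd.DataFrame({'years': [1.0, None, 3.0], 'lang': ['py', 'r', 'py']})

# Step 5: fill the numeric gap with the column mean (mean of [1, 3] is 2.0)
toy['years'] = toy['years'].fillna(toy['years'].mean())

# Step 6: dummy the categorical column and drop the original
toy = pd.concat([toy.drop('lang', axis=1),
                 pd.get_dummies(toy['lang'], prefix='lang', drop_first=True)],
                axis=1)
print(toy)
```

With `drop_first=True`, the first category (`py`) is dropped, leaving a single `lang_r` indicator column.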
#### Question 4
**4.** Use the output and above plot to correctly fill in the keys of the **q4_piat** dictionary with the correct variable. Notice that only the optimal model results are given back in the above - they are stored in **lm_model**, **X_train**, **X_test**, **y_train**, and **y_test**. If more than one answer holds, provide a tuple holding all the correct variables in the order of first variable alphabetically to last variable alphabetically.
```
print(X_train.shape[1]) #Number of columns
print(r2_scores_test[np.argmax(r2_scores_test)]) # The model we should implement test_r2
print(r2_scores_train[np.argmax(r2_scores_test)]) # The model we should implement train_r2
a = 'we would likely have a better rsquared for the test data.'
b = 1000
c = 872
d = 0.69
e = 0.82
f = 0.88
g = 0.72
h = 'we would likely have a better rsquared for the training data.'
q4_piat = {'The optimal number of features based on the results is': c,
'The model we should implement in practice has a train rsquared of': e,
'The model we should implement in practice has a test rsquared of': d,
'If we were to allow the number of features to continue to increase': h
}
#Check against your solution
t.q4_piat_check(q4_piat)
```
#### Question 5
**5.** Ridge regression in sklearn applies an L2 penalty to the coefficients (plain `LinearRegression` is unpenalized ordinary least squares). Because of this penalty, and because all the variables were normalized, we can look at the size of the coefficients in the model as an indication of the impact of each variable on the salary. The larger the coefficient, the larger the expected impact on salary.
Use the space below to take a look at the coefficients. Then use the results to provide the **True** or **False** statements based on the data.
```
def coef_weights(coefficients, X_train):
'''
INPUT:
coefficients - the coefficients of the linear model
X_train - the training data, so the column names can be used
OUTPUT:
coefs_df - a dataframe holding the coefficient, estimate, and abs(estimate)
Provides a dataframe that can be used to understand the most influential coefficients
in a linear model by providing the coefficient estimates along with the name of the
variable attached to the coefficient.
'''
coefs_df = pd.DataFrame()
coefs_df['est_int'] = X_train.columns
coefs_df['coefs'] = lm_model.coef_
coefs_df['abs_coefs'] = np.abs(lm_model.coef_)
coefs_df = coefs_df.sort_values('abs_coefs', ascending=False)
return coefs_df
#Use the function
coef_df = coef_weights(lm_model.coef_, X_train)
#A quick look at the top results
coef_df.head(20)
a = True
b = False
#According to the data...
q5_piat = {'Country appears to be one of the top indicators for salary': a,
'Gender appears to be one of the indicators for salary': b,
'How long an individual has been programming appears to be one of the top indicators for salary': a,
'The longer an individual has been programming the more they are likely to earn': b}
t.q5_piat_check(q5_piat)
```
#### Congrats of some kind
Congrats! Hopefully this was a great review, or an eye-opening experience, about how to put the steps of an analysis together. In the next lesson, you will look at how to take this and show it off to others so they can act on it.
|
github_jupyter
|
```
import pandas as pd #dataframes and matrix-like data handling
import numpy as np #numerical arrays and sorting
import matplotlib.pyplot as plt #matplotlib is used for plotting data
import matplotlib.ticker as ticker #used for changing tick spacing
import datetime as dt #used for dates
import matplotlib.dates as mdates #used for dates, in a different way
import os #used for changes of directory
import warnings
warnings.filterwarnings("ignore")
from sklearn.preprocessing import MinMaxScaler # It scales the data between 0 and 1
import sys
sys.path.append('../')
from utils import simple_plot, simple_plot_by_date, hit_count
import torch
import torch.nn as nn
from torchvision.transforms import ToTensor
from torch.utils.data.dataloader import DataLoader
import torch.nn.functional as F
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dataset_1yr = pd.read_csv("../../Data/all_stocks_5yr.csv")
dataset_1yr.head()
# Changing the date column to the datetime format (best format to work with time series)
dataset_1yr['Date'] = [dt.datetime.strptime(d,'%Y-%m-%d').date() for d in dataset_1yr['Date']]
dataset_1yr.head()
# Assigning a mid price column with the mean of the Highest and Lowest values
dataset_1yr['Mid'] = (dataset_1yr['High'] + dataset_1yr['Low'])/2
dataset_1yr.head()
# Getting rid of rows that contain null values
missing_data = pd.DataFrame(dataset_1yr.isnull().sum()).T
print(missing_data)
for index, column in enumerate(missing_data.columns):
if missing_data.loc[0][index] != 0:
dataset_1yr = dataset_1yr.drop(dataset_1yr.loc[dataset_1yr[column].isnull()].index)
missing_data = pd.DataFrame(dataset_1yr.isnull().sum()).T
print(missing_data)
# Let's analyze 3M stocks a bit deeper
MMM_stocks = dataset_1yr[dataset_1yr['Name'] == 'MMM']
MMM_stocks.head()
# Creating a percent change column related to the closing price
percent_change_closing_price = MMM_stocks['Close'].pct_change()
percent_change_closing_price.fillna(0, inplace=True)
MMM_stocks['PC_change'] = pd.DataFrame(percent_change_closing_price)
# As we want to predict the closing price, add the target column: the close price shifted by one day
MMM_stocks['Target'] = MMM_stocks['Close'].shift(-1)
MMM_stocks = MMM_stocks.dropna(subset=['Target'])  # the last row has no next-day target
MMM_stocks = MMM_stocks.drop('Name', axis = 1)
MMM_stocks = MMM_stocks.drop('Date', axis = 1)
MMM_stocks.head()
# Separating as Training and Testing
train_data = MMM_stocks.iloc[:1000,:]
train_data = train_data.drop('Target',axis=1)
test_data = MMM_stocks.iloc[1000:,:]
test_data = test_data.drop('Target',axis=1)
y_train = MMM_stocks.iloc[:1000,-1]
y_test = MMM_stocks.iloc[1000:,-1]
print(train_data.shape)
print(test_data.shape)
print(y_train.shape)
print(y_test.shape)
# Data still needs to be scaled.
# Training Data
scaler_closing_price = MinMaxScaler(feature_range = (0, 1))
scaler_closing_price.fit(np.array(train_data['Close']).reshape(-1,1))
scaler_dataframe = MinMaxScaler(feature_range = (0, 1))
training_set_scaled = pd.DataFrame(scaler_dataframe.fit_transform(train_data))
training_set_scaled.head()
y_set_scaled = pd.DataFrame(scaler_closing_price.transform(np.array(y_train).reshape(-1,1)))
# Testing Data: use transform (not fit_transform) so the test set is scaled
# with the parameters learned from the training set
testing_set_scaled = pd.DataFrame(scaler_dataframe.transform(test_data))
y_test_scaled = pd.DataFrame(scaler_closing_price.transform(np.array(y_test).reshape(-1,1)))
# Preparing data for the experiment with an univariate model
# Getting Closing Price and arranging lists for training/testing based on the sequence
def split_sequence(sequence, n_steps):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix > len(sequence)-1:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
train_univariate, y_train_univariate = split_sequence(training_set_scaled[3], 5)
test_univariate, y_test_univariate = split_sequence(testing_set_scaled[3], 5)
def reshape_pandas_data(x, y, input_size):
x = torch.from_numpy(np.array(x)).type(torch.Tensor).view([-1, input_size])
y = torch.from_numpy(np.array(y)).type(torch.Tensor).view(-1)
return (x, y)
train_tensor, target = reshape_pandas_data(train_univariate, y_train_univariate, train_univariate.shape[1])
#train_tensor, target = reshape_pandas_data(train_data, y_train, train_data.shape[1])
print(train_tensor.shape)
#train_tensor = DataLoader(train_tensor, batch_size)
# Creating a device data loader in order to pass batches to device memory
def to_device(data, device):
''' Move tensor to chosen device'''
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
class DeviceDataLoader():
def __init__(self, data, device):
self.data = data
self.device = device
def __iter__(self):
"Yield a batch of data after moving it to device"
for item in self.data:
yield to_device(item, self.device)
def __len__(self):
"Number of batches"
return len(self.data)
# Move the training tensors to device memory once, up front
train_tensor = to_device(train_tensor, device)
target = to_device(target, device)
# Recurrent neural network (many-to-one)
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers=1, num_classes=1):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, num_classes)
self.input_size = input_size
    def forward(self, x):
        # Give the LSTM a 3-D (batch, seq, features) input:
        # treat each 5-value window as one timestep of 5 features
        x = x.reshape(-1, 1, self.input_size)
# Set initial hidden and cell states
h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
# Forward propagate LSTM
out, self.hidden = self.lstm(x, (h0, c0)) # out: tensor of shape (batch_size, seq_length, hidden_size)
# Decode the hidden state of the last time step
out = self.fc(out[:, -1, :])
return out
# Hyper-parameters
epochs = 50
input_size = 5
hidden_size = 128
num_layers = 2
num_classes = 1
learning_rate = 1e-3
model = RNN(input_size, hidden_size, num_layers, num_classes).to(device)
for t in model.parameters():
print(t.shape)
# Per-window sanity check: run each input window through the model once
loss_fn = torch.nn.MSELoss(reduction='sum')
loss = 0.0
for index, tensor in enumerate(train_tensor):
    y_pred = model(tensor.view(1, 1, -1))  # (batch, seq, features)
    loss += loss_fn(y_pred, target[index].view(1, 1))
print(loss)
hist = []
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for epoch in range(epochs):
    y_pred = model(train_tensor)
    loss = loss_fn(y_pred.view(-1), target)
    hist.append(loss.item())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    if (epoch + 1) % 10 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, epochs, loss.item()))
plt.plot(hist, '-x')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.title('Loss vs Epoch');
class TimeSeriesRNNModel(nn.Module):
def __init__(self):
super(TimeSeriesRNNModel, self).__init__()
self.lstm1 = nn.LSTM(input_size=5, hidden_size=50, num_layers=1)
self.lstm2 = nn.LSTM(input_size=50, hidden_size=25, num_layers=1)
self.linear = nn.Linear(in_features=25, out_features=1)
self.h_t1 = None
self.c_t1 = None
self.h_t2 = None
self.c_t2 = None
    def initialize_model(self, input_data):
        # Hidden/cell states must match the parameter dtype (float32)
        self.h_t1 = torch.rand(1, 1, 50).to(device)
        self.c_t1 = torch.rand(1, 1, 50).to(device)
        self.h_t2 = torch.rand(1, 1, 25).to(device)
        self.c_t2 = torch.rand(1, 1, 25).to(device)
    def forward(self, input_data):
        outputs = []
        self.initialize_model(input_data)
        input_data = input_data.reshape(-1, len(input_data), 5)
        output = None
        for _, input_t in enumerate(input_data.chunk(input_data.size(1), dim=1)):
            # nn.LSTM returns (output, (h_n, c_n)), not a bare (h, c) pair
            out1, (self.h_t1, self.c_t1) = self.lstm1(input_t, (self.h_t1, self.c_t1))
            out2, (self.h_t2, self.c_t2) = self.lstm2(out1, (self.h_t2, self.c_t2))
            output = self.linear(out2)
            outputs += [output]
        outputs = torch.stack(outputs, 1).squeeze(2)
        return outputs
# Hyper-parameters
epochs = 20
learning_rate = 1e-3
model = TimeSeriesRNNModel().to(device)
for t in model.parameters():
print(t.shape)
hist = []
loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for epoch in range(epochs):
#model.zero_grad()
y_pred = model(train_tensor.to(device))
loss = loss_fn(y_pred, target.to(device))
hist.append(loss.item())
loss.backward()
optimizer.step()
optimizer.zero_grad()
#if epoch+1 % 10 == 0:
print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, epochs, loss.item()))
```
|
github_jupyter
|
```
#Created 2021-09-08
#Copyright Spencer W. Leifeld
from pyspark.sql import SparkSession as Session
from pyspark import SparkConf as Conf
from pyspark import SparkContext as Context
import os
os.environ['SPARK_LOCAL_IP']='192.168.1.2'
os.environ['HADOOP_HOME']='/home/geno1664/Developments/Github_Samples/RDS-ENV/hadoop'
os.environ['LD_LIBRARY_PATH']='$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native'
os.environ['PYSPARK_DRIVER_PYTHON']='jupyter'
os.environ['PYSPARK_DRIVER_PYTHON_OPTS']='notebook'
os.environ['PYSPARK_PYTHON']='python3'
os.environ['PYARROW_IGNORE_TIMEZONE']='1'
configuration = Conf().setAppName('RDS_2').setMaster('spark://GenoMachine:7077')
configuration.set('spark.executor.memory','10G').set('spark.driver.memory', '2G').set('spark.cores.max', '8')
context = Context(conf=configuration)
session = Session(context)
from Functions.IO import CSV_File
csvDF = CSV_File(session, r'/home/geno1664/Developments/Github_Samples/RDS-ENV/Rural_Development_Study_No2/IO/Jobs.csv')
employmentByJob = csvDF.GetSparkDF().select('State', 'County', 'PctEmpAgriculture', 'PctEmpConstruction', 'PctEmpMining', 'PctEmpTrade', 'PctEmpTrans', \
'PctEmpInformation', 'PctEmpFIRE', 'PctEmpServices', 'PctEmpGovt', 'PctEmpManufacturing')
employmentByJob = employmentByJob.withColumnRenamed('PctEmpAgriculture', 'Farmers').withColumnRenamed('PctEmpConstruction', 'Builders').withColumnRenamed('PctEmpMining', 'Miners') \
.withColumnRenamed('PctEmpTrade', 'Retail_Associates').withColumnRenamed('PctEmpFIRE', 'Businessmen').withColumnRenamed('PctEmpServices', 'Hospitality_Associates') \
.withColumnRenamed('PctEmpGovt', 'Civil_Servants').withColumnRenamed('PctEmpManufacturing', 'Craftsmen').withColumnRenamed('PctEmpInformation', 'Technologists') \
.withColumnRenamed('PctEmpTrans', 'Teamsters')
employmentByJob = employmentByJob.where(employmentByJob.State != 'US')
employmentByJob = employmentByJob.repartition('State')
employmentByJob.show()
csvDF = CSV_File(session, r'/home/geno1664/Developments/Github_Samples/RDS-ENV/Rural_Development_Study_No2/IO/People.csv')
educationRate = csvDF.GetSparkDF().select('State', 'County', 'Ed1LessThanHSPct', 'Ed2HSDiplomaOnlyPct', 'Ed3SomeCollegePct', 'Ed4AssocDegreePct', 'Ed5CollegePlusPct')
educationRate = educationRate.withColumnRenamed('Ed1LessThanHSPct', 'Some_High_School').withColumnRenamed('Ed2HSDiplomaOnlyPct', 'High_School_Degree') \
.withColumnRenamed('Ed3SomeCollegePct', 'Some_College').withColumnRenamed('Ed4AssocDegreePct', 'Associates_Degree').withColumnRenamed('Ed5CollegePlusPct', 'College_Graduate')
educationRate = educationRate.where(educationRate.State != 'US')
educationRate = educationRate.repartition('State')
educationRate.show()
from databricks import koalas as ks
employmentByJob = employmentByJob.to_koalas().melt(id_vars=['State', 'County'], var_name='Employment_Category', value_name='Employment_Rate').to_spark()
educationRate = educationRate.to_koalas().melt(id_vars=['State', 'County'], var_name='Education_Category', value_name='Education_Rate').to_spark()
mainDF = employmentByJob.join(educationRate, on=['State', 'County'], how='cross') \
    .fillna(0, subset=['Employment_Rate', 'Education_Rate']).fillna('NONE', subset=['Employment_Category', 'Education_Category'])
mainDF = mainDF.to_koalas()
mainDF['Heuristic'] = (mainDF['Employment_Rate'] / 100) * (mainDF['Education_Rate'] / 100) * 100
mainDF = mainDF.to_spark().select('Employment_Category', 'Education_Category', 'Heuristic')
degreeDF = mainDF.groupBy('Employment_Category').pivot('Education_Category').mean()
degreeDF.show()
degreeDF = degreeDF.to_koalas()
degreeDF['No_Post_Secondary_Degree'] = degreeDF['Some_High_School'] + degreeDF['High_School_Degree'] + degreeDF['Some_College']
degreeDF['Post_Secondary_Degree'] = degreeDF['College_Graduate'] + degreeDF['Associates_Degree']
degreeDF = degreeDF.drop(labels=['Some_High_School', 'High_School_Degree', 'College_Graduate', 'Some_College', 'Associates_Degree']).to_spark()
degreeDF.show()
print('Workplace Population Correlation between Degree and Non-Degree Holders: ' + str(degreeDF.corr('No_Post_Secondary_Degree', 'Post_Secondary_Degree')))
degreeDF.agg({'No_Post_Secondary_Degree': 'sum', 'Post_Secondary_Degree': 'sum'}).show()
employmentDF = mainDF.groupBy('Education_Category').pivot('Employment_Category').mean()
employmentDF = ks.melt(employmentDF.to_koalas(), id_vars='Education_Category', var_name='Employment_Category', value_name='Employment_Percentage').to_spark()
employmentDF.drop('Education_Category').groupBy('Employment_Category').sum().show()
```
|
github_jupyter
|
```
from __future__ import print_function
import torch
import numpy as np
from PIL import Image
x = torch.empty(5, 3)
print(x)
x = torch.rand(5, 3)
print(x)
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
x = torch.tensor([5.5, 3])
print(x)
x = x.new_ones(5, 3, dtype=torch.double)
print(x)
print(x.dtype)
x = torch.randn_like(x, dtype=torch.float)
print(x.dtype)
print(x)
print(x.shape)
print(x)
print('\n')
y = torch.rand(5, 3)
print(x + y)
print(torch.add(x, y))
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
x = torch.randn(1)
print(x)
print(x.item())
x[0].item()
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
a.add_(1)
print(b)
print(a)
if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
y = torch.ones_like(x, device=device) # directly create a tensor on GPU
x = x.to(device) # or just use strings ``.to("cuda")``
z = x + y
print(z)
print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together!
```
### Automatic differentiation (autograd)
```
x = torch.ones(2, 2, requires_grad=True)
print(x)
y = x + 2
print(y)
print(y.grad_fn)
z = y*y*3
out = z.mean()
print(z)
print(out)
# gradients
out.backward()
print(x.grad)
print(x)
c = torch.ones(2, 2, requires_grad=True)
d = c.sum()
d.backward()
print(c.grad)
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
x = torch.ones(1, 1, requires_grad=True)
c = x.add(3)
y = c**2
y.backward(retain_graph=True)  # keep the graph so a second backward from c works
c.backward()
print(x.grad)  # gradients accumulate: dy/dx + dc/dx = 2*(x+3) + 1 = 9
in_ = torch.tensor(np.arange(5, dtype='float32'))
W1 = torch.rand(5, 5, requires_grad=True)
layer2 = in_.matmul(W1)
W2 = torch.rand(5, 5, requires_grad=True)
layer3 = layer2.matmul(W2)
W3 = torch.rand(5, requires_grad=True)
d = layer3.dot(W3)
loss = d.sum()
# print(loss.backward())
loss.backward()
print(W1.grad)
```
### Simple full layer neural network Example
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import math
import torch
from torch.optim import Optimizer
class Adam(Optimizer):
def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
weight_decay=0, amsgrad=False):
if not 0.0 <= lr:
raise ValueError("Invalid learning rate: {}".format(lr))
if not 0.0 <= eps:
raise ValueError("Invalid epsilon value: {}".format(eps))
if not 0.0 <= betas[0] < 1.0:
raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0]))
if not 0.0 <= betas[1] < 1.0:
raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1]))
defaults = dict(lr=lr, betas=betas, eps=eps,
weight_decay=weight_decay, amsgrad=amsgrad)
super(Adam, self).__init__(params, defaults)
def __setstate__(self, state):
super(Adam, self).__setstate__(state)
for group in self.param_groups:
group.setdefault('amsgrad', False)
def step(self, closure=None, score=1):
loss = None
if closure is not None:
loss = closure()
for group in self.param_groups:
for p in group['params']:
if p.grad is None:
continue
grad = p.grad.data
if grad.is_sparse:
raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
amsgrad = group['amsgrad']
state = self.state[p]
# State initialization
if len(state) == 0:
state['step'] = 0
# Exponential moving average of gradient values
state['exp_avg'] = torch.zeros_like(p.data)
# Exponential moving average of squared gradient values
state['exp_avg_sq'] = torch.zeros_like(p.data)
if amsgrad:
# Maintains max of all exp. moving avg. of sq. grad. values
state['max_exp_avg_sq'] = torch.zeros_like(p.data)
exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
if amsgrad:
max_exp_avg_sq = state['max_exp_avg_sq']
beta1, beta2 = group['betas']
state['step'] += 1
if group['weight_decay'] != 0:
grad.add_(group['weight_decay'], p.data)
# Decay the first and second moment running average coefficient
exp_avg.mul_(beta1).add_(1 - beta1, grad)
exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
if amsgrad:
# Maintains the maximum of all 2nd moment running avg. till now
torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
# Use the max. for normalizing running avg. of gradient
denom = max_exp_avg_sq.sqrt().add_(group['eps'])
else:
denom = exp_avg_sq.sqrt().add_(group['eps'])
bias_correction1 = 1 - beta1 ** state['step']
bias_correction2 = 1 - beta2 ** state['step']
step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
p.data.addcdiv_(step_size*score, exp_avg, denom)
return loss
class PolicyNetwork(nn.Module):
def __init__(self, input_dim, learning_rate):
super(PolicyNetwork, self).__init__()
drop = 0.5
self.classifier = nn.Sequential(
nn.BatchNorm1d(num_features=input_dim),
nn.Linear(input_dim, 512),
nn.BatchNorm1d(512),
nn.LeakyReLU(),
nn.Dropout(drop),
nn.Linear(512, 256),
nn.BatchNorm1d(256),
nn.LeakyReLU(),
nn.Dropout(drop),
nn.Linear(256, 128),
nn.BatchNorm1d(128),
nn.LeakyReLU(),
nn.Dropout(drop),
nn.Linear(128, 10),
nn.BatchNorm1d(10),
nn.LeakyReLU(),
nn.Linear(10, 1),
nn.Sigmoid()
)
self.optimizer = Adam(self.parameters(), lr=learning_rate)
self.loss_fn = torch.nn.MSELoss()
def forward(self, x):
print(x.size())
x = self.classifier(x)
return x
# def __init__(self, input_dim, learning_rate):
# super(PolicyNetwork, self).__init__()
# self.h1 = nn.Linear(input_dim, 20)
# self.h2 = nn.Linear(20, 10)
# self.out = nn.Linear(10, 1)
# self.optimizer = optim.Adam(self.parameters(), lr=learning_rate)
# self.loss_fn = torch.nn.MSELoss()
# def forward(self, x):
# x = F.leaky_relu(self.h1(x))
# x = F.leaky_relu(self.h2(x))
# x = F.leaky_relu(self.out(x))
# return x
def train(self, X, y, n_steps, batch_size=64):
N = X.shape[0]
batch_size=min(batch_size, N)
for t in range(n_steps):
batch_indices = np.random.randint(0, N, batch_size)
y_pred = self.forward(X[batch_indices])
# Compute and print loss.
loss = self.loss_fn(y_pred, y[batch_indices])
# print(loss.item())
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable
# weights of the model). This is because by default, gradients are
# accumulated in buffers( i.e, not overwritten) whenever .backward()
# is called. Checkout docs of torch.autograd.backward for more details.
self.optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to model
# parameters
loss.backward()
# Calling the step function on an Optimizer makes an update to its
# parameters
self.optimizer.step()
net = PolicyNetwork(4, 0.01)
inputs = np.array(
[[1,2,3,4],
[1,2,3,4],
[1,2,3,4],
[1,2,3,4]], dtype='float64'
)
targets = np.array([
1,
1,
1,
1
], dtype='float64')
print(net(torch.from_numpy(inputs).float()))
net.train(torch.from_numpy(inputs).float(), torch.from_numpy(targets).float().view(-1, 1), n_steps=1000, batch_size=2)
print(net(torch.from_numpy(inputs).float()))
arr = np.array([[1, 2],[3, 4]])
ar2 = np.array([[1, 2],[3, 4]])
tensor = torch.from_numpy(arr).float()
tensor2 = torch.from_numpy(ar2).float()
# print(tensor.size())
# batchNorm = nn.LayerNorm(4)
# batchNorm(tensor)
torch.cat([tensor, tensor2], 0)
net(torch.from_numpy(inputs).float())
# Inspect a parameter's gradient before and after a backward pass
first_linear = net.classifier[1]   # the first nn.Linear in the Sequential
print(first_linear.bias.grad)
# create your optimizer and loss function
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()
# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(torch.from_numpy(inputs).float())
loss = criterion(output, torch.from_numpy(targets).float().view(-1, 1))
loss.backward()
optimizer.step()
print(first_linear.bias.grad)
```
|
github_jupyter
|
# Fourier Transforms
```
import numpy as np
from scipy.integrate import quad
import matplotlib.pyplot as plt
```
## Part 1: The Discrete Fourier Transform
We’re about to make the transition from Fourier series to the Fourier transform. “Transition” is the
appropriate word, for in the approach we’ll take the Fourier transform emerges as we pass from periodic
to nonperiodic functions. To make the trip we’ll view a nonperiodic function (which can be just about
anything) as a limiting case of a periodic function as the period becomes longer and longer.
We're going to start by creating a pulse function. Let's start with the following pulse function:
```
def pulseFunction(x):
return 1/(3 + (x-20)**2)
x = np.linspace(-10, 50, 200)
plt.plot(x, pulseFunction(x))
plt.plot(np.zeros(100), np.linspace(0, 0.5, 100), "--")
plt.plot(np.ones(100) *40, np.linspace(0, 0.5, 100), "--")
plt.show()
```
### Step 1: Periodic Pulse Function
Take the `pulseFunction` above and make it periodic. Give it a variable period length (we will eventually make this 40 as shown by the vertical dotted lines above).
```
def periodicPulseFunction(x, period):
"""
x : the x values to consider
period : the period of the function
"""
n = x // period
x = x - n*period
return pulseFunction(x)
```
Plot your `periodicPulseFunction` with a period of $40$ from $-100$ to $100$ and check that it is correct.
```
## TO DO: Plot your periodicPulseFunction with a period of 40 from x = -100 to x = 100
x = np.linspace(-100,100,1000)
plt.plot(x,periodicPulseFunction(x, 40))
```
### Step 2: Define the Fourier Series
This function is neither odd nor even, so we're going to have to take into account both the even coefficients $a_k$ and the odd coefficients $b_k$.
$$ f(x) = \sum\limits_{k=0}^{\infty} a_k \cos\left(\frac{2\pi k x}{T}\right) + b_k \sin\left(\frac{2\pi k x}{T}\right) $$
Complete the `fourierSeriesSum` that calculates the summation described above.
```
def fourierSeriesSum(k, ak, bk, x, period):
    """
    Parameters:
    k : the maximum k value to include in the summation above
    ak : an array of length 'k+1' containing the even coefficients (from a_0 to a_k)
    bk : an array of length 'k+1' containing the odd coefficients (from b_0 to b_k)
    x : an array of the x values to consider
    period : the period of the function
    """
    total = 0  # avoid shadowing the built-in sum()
    for i in range(k+1):
        total = total + ak[i]*np.cos(2*np.pi*i*x/period) + bk[i]*np.sin(2*np.pi*i*x/period)
    return total
```
### Step 3: Define the Integrands
Because we have both even and odd terms, we're going to have two separate integrals:
The integral to solve for the even terms:
$$ a_k = \frac{1}{T} \int\limits_{0}^{T} f(x, \text{period}) \cos\left(\frac{2\pi k x}{T} \right) dx$$
The integral to solve for the odd terms:
$$ b_k = \frac{1}{T} \int\limits_{0}^{T} f(x, \text{period}) \sin\left(\frac{2\pi k x}{T} \right) dx$$
```
def odd_integrand(x, f, k, period):
"""
Parameters:
x: the x values to consider
f: the function f(x, period) used in the integral
k: the k value to use
period: the period of f
"""
integrand = (1/period)*f(x,period)*np.sin(2*np.pi*k*x/period)
return integrand
def even_integrand(x, f, k, period):
"""
Parameters:
x: the x values to consider
f: the function f(x, period) used in the integral
k: the k value to use
period: the period of f
"""
integrand = (1/period)*f(x,period)*np.cos(2*np.pi*k*x/period)
return integrand
```
### Step 4: Find the Fourier Coefficients
Ok! Now it's time to find the coefficients. This is the same process as last time:
1. Initialize an $a_k$ and $b_k$ array
2. Loop through all the $k$ values
3. Find $a_k[i]$ and $b_k[i]$ where i $\in [0, k]$
4. Return $a_k$ and $b_k$
(At the end of your quad function, add "limit = 100" as an argument)
```
def findFourierCoefficients(f, k, period):
"""
Parameters:
f: the function to evaluate
k: the maximum k value to consider
period: the period of f
"""
ak = np.array([])
for i in range(k+1):
temp = quad(even_integrand, 0, period, args=(f,i,period,), limit = 100)
ak = np.append(ak, temp[0])
bk = np.array([])
for i in range(k+1):
temp = quad(odd_integrand, 0, period, args=(f,i,period,), limit = 100)
bk = np.append(bk, temp[0])
return ak, bk
```
### Step 5: Putting it all Together
Let's test it out!
```
k = 100
period = 40
[ak, bk] = findFourierCoefficients(periodicPulseFunction, k, period)
y = fourierSeriesSum(k, ak, bk, x, period)
plt.plot(x, y)
plt.title("Pulse Function Constructed from Fourier Series")
plt.show()
```
### Step 6: Analyzing the Signal
Let's visualize what the coefficients look like.
Plot the even coefficients ($a_k$ versus $k$).
```
# TO DO: Plot ak versus k
k = 100
period = 40
x = np.linspace(0,100,101)
plt.plot(x, findFourierCoefficients(periodicPulseFunction, k, period)[0])
# print(findFourierCoefficients(periodicPulseFunction, k, period)[0])
```
Plot the odd coefficients ($b_k$ versus $k$).
```
# TO DO: Plot bk versus k
k = 100
period = 40
x = np.linspace(0,100,101)
plt.plot(x, findFourierCoefficients(periodicPulseFunction, k, period)[1])
```
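As a sanity check on coefficients like those plotted above, `np.fft.rfft` can recover series coefficients from uniformly sampled values. This is a sketch on a test signal with known coefficients (it uses the convention from Step 2 where $a_k$ and $b_k$ multiply $\cos$ and $\sin$ directly; the test signal and sample count here are illustrative choices):

```python
import numpy as np

T = 40.0            # period
N = 400             # number of uniform samples over one period
x = np.linspace(0.0, T, N, endpoint=False)
# test signal with known coefficients: a_0 = 0.5, a_1 = 0.3, b_2 = 0.2
f = 0.5 + 0.3 * np.cos(2 * np.pi * x / T) + 0.2 * np.sin(2 * np.pi * 2 * x / T)

F = np.fft.rfft(f)
a = 2.0 * F.real / N   # cosine (even) coefficients
a[0] /= 2.0            # the k = 0 term is not doubled
b = -2.0 * F.imag / N  # sine (odd) coefficients

print(a[0], a[1], b[2])  # ≈ 0.5, 0.3, 0.2
```

For band-limited signals sampled densely enough, this recovers the coefficients to machine precision, which makes it a cheap cross-check on the quadrature-based approach.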
## Part 2: Application
### Option 1
Below I've imported and plotted a signal for you. Break down this signal into sines and cosines, and plot the coefficients ($a_k$ versus $k$ and $b_k$ versus $k$)
```
x, y = np.loadtxt("signal.txt", unpack=True)
plt.figure(figsize=(15, 5))
plt.plot(x, y)
plt.show()
print(len(x))
print(x)
period = 15.70796327
def periodicFunction(inp):
"""
inp : point to find value for
"""
while inp > period:
inp = inp - period
for i in range(1000):
if (abs(inp - x[i]) < 0.0079):
return y[i]
def odd_integrand(x, f, k, period):
"""
Parameters:
x: the x values to consider
f: the function f(x, period) used in the integral
k: the k value to use
period: the period of f
"""
integrand = (1/period)*f(x)*np.sin(2*np.pi*k*x/period)
return integrand
def even_integrand(x, f, k, period):
"""
Parameters:
x: the x values to consider
f: the function f(x, period) used in the integral
k: the k value to use
period: the period of f
"""
integrand = (1/period)*f(x)*np.cos(2*np.pi*k*x/period)
return integrand
def findFourierCoefficients2(f, k, period):
"""
Parameters:
f: the function to evaluate
k: the maximum k value to consider
period: the period of f
"""
ak = np.array([])
for i in range(k+1):
temp = quad(even_integrand, 0, period, args=(f,i,period), limit = 100)
ak = np.append(ak, temp[0])
bk = np.array([])
for i in range(k+1):
temp = quad(odd_integrand, 0, period, args=(f,i,period), limit = 100)
bk = np.append(bk, temp[0])
return ak, bk
t = np.linspace(0,100,101)
plt.plot(t, findFourierCoefficients2(periodicFunction, 100, period)[0])
plt.plot(t, findFourierCoefficients2(periodicFunction, 100, period)[1])
```
### Option 2
Find a signal from real data, and find the cosines and sines values that comprise that signal.
|
github_jupyter
|
---
_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---
# Assignment 2 - Introduction to NLTK
In part 1 of this assignment you will use nltk to explore the Herman Melville novel Moby Dick. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.
## Part 1 - Analyzing Moby Dick
```
import nltk
import pandas as pd
import numpy as np
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
moby_raw = f.read()
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
```
### Example 1
How many tokens (words and punctuation symbols) are in text1?
*This function should return an integer.*
```
def example_one():
return len(nltk.word_tokenize(moby_raw)) # or alternatively len(text1)
```
### Example 2
How many unique tokens (unique words and punctuation) does text1 have?
*This function should return an integer.*
```
def example_two():
return len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))
```
### Example 3
After lemmatizing the verbs, how many unique tokens does text1 have?
*This function should return an integer.*
```
from nltk.stem import WordNetLemmatizer
def example_three():
lemmatizer = WordNetLemmatizer()
lemmatized = [lemmatizer.lemmatize(w,'v') for w in text1]
return len(set(lemmatized))
```
### Question 1
What is the lexical diversity of the given text input? (i.e. ratio of unique tokens to the total number of tokens)
*This function should return a float.*
```
from nltk.stem import WordNetLemmatizer
def answer_one():
lemmatizer = WordNetLemmatizer() #instantiate WordNetLemmatizer
lemmatized = [lemmatizer.lemmatize(w) for w in text1] # use list comprehension to lemmatize each work in text1
total = len(lemmatized) #calculate the total lemmas
    unique = len(set(lemmatized)) #calculate the number of unique lemmas
lexical_diversity = unique/total #calculate the ratio of unique lemmas to total lemmas
return lexical_diversity
answer_one()
```
### Question 2
What percentage of tokens is 'whale' or 'Whale'?
*This function should return a float.*
```
from nltk.stem import WordNetLemmatizer
def answer_two():
tokens = nltk.word_tokenize(moby_raw) #tokenize moby_raw
dist = nltk.FreqDist(tokens) #calculate the distribution of tokens
whale_freq = dist.freq("whale")
Whale_freq = dist.freq("Whale")
#access the frequency distributions for 'whale' and 'Whale'
total_freq = (whale_freq + Whale_freq)*100
#add the distributions, multiply by 100 for a percentage
return total_freq
answer_two()
```
### Question 3
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
*This function should return a list of 20 tuples where each tuple is of the form `(token, frequency)`. The list should be sorted in descending order of frequency.*
```
def answer_three():
tokens = nltk.word_tokenize(moby_raw) #tokenize moby_raw
# top_20 = dist.pformat(maxlen=20)
# top_20_split = top_20.split(',')
# word = []
# count = []
# for i,value in enumerate(top_20_split):
# word.append(top_20_split[i].split(':')[0].replace("'","").lstrip().rstrip())
# word.pop(0)
# word.insert(0,',')
# word.pop(21)
# word.pop(1)
# #word.pop(19)
# #word.insert(19,"''")
# for i in range(len(top_20_split)-2):
# count.append(int(top_20_split[i+1].split(':')[1].lstrip().rstrip()))
# merged_list= list(zip(word,count))
## Took the scenic route the first go around
top_20 = (nltk.FreqDist(tokens)
.most_common(20)) #calculate the distribution of tokens and return the top 20
return top_20
answer_three()
```
### Question 4
What tokens have a length of greater than 5 and frequency of more than 150?
*This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use `sorted()`*
```
def answer_four():
tokens = nltk.word_tokenize(moby_raw) #tokenize moby_raw
dist = nltk.FreqDist(tokens) #calculate the distribution of tokens
fivechars_onefiftytimes = [w for w in dist if len(w) > 5 and dist[w] > 150]
# use list comprehension to return tokens greater than length 5 that have a frequency greater than 150
fivechars_onefiftytimes.sort()
return fivechars_onefiftytimes
```
### Question 5
Find the longest word in text1 and that word's length.
*This function should return a tuple `(longest_word, length)`.*
```
def answer_five():
lemmatizer = nltk.WordNetLemmatizer() #instantiate WordNetLemmatizer
lemmatized = [lemmatizer.lemmatize(w) for w in text1] #create a list of lemmas using the lemmatizer instance
dist = nltk.FreqDist(lemmatized) #calculate the distribution of lemmas
# first_answer = ([w for w in lemmatized if len(w) > 22][0],
# len([w for w in lemmatized if len(w) > 22][0]))
# #guess and check method using a list comprehension
longest_word_length = 0
longest_word = ""
for i, word in enumerate(lemmatized):
if len(word) > longest_word_length:
longest_word_length = len(word)
longest_word = word
else:
pass
#loop over list of lemmas and store the running longest word length along with the actual word
answer = (longest_word, longest_word_length)
return answer
answer_five()
```
### Question 6
What unique words have a frequency of more than 2000? What is their frequency?
Hint: you may want to use `isalpha()` to check if the token is a word and not punctuation.
*This function should return a list of tuples of the form `(frequency, word)` sorted in descending order of frequency.*
```
def answer_six():
    tokens = nltk.word_tokenize(moby_raw) #tokenize moby_raw
    dist = nltk.FreqDist(tokens) #calculate the distribution of tokens
token = [w for w in dist if w.isalpha() and dist[w] > 2000]
#use list comprehension to return tokens that are words and have a frequency greater than 2000
count = [dist[w] for w in dist if w.isalpha() and dist[w] > 2000]
#use list comprehension to return the count for the tokens returned in the above list comprehension
combined = list(zip(count, token)) #combine the two lists
combined.sort(reverse = True, key = lambda x: x[0]) #sort the list by the first entry in the tuple
return combined
answer_six()
```
### Question 7
What is the average number of tokens per sentence?
*This function should return a float.*
```
def answer_seven():
    sentences = nltk.sent_tokenize(moby_raw) #split moby_raw into sentences
text_sentences = nltk.Text(sentences)
counter = 0
sum_total = 0
for i,sentence in enumerate(sentences):
sum_total += len(nltk.word_tokenize(sentence))
#loop through sentences and calculate a running total of sentence lengths
counter += 1
average = sum_total/counter
#return sum of sentences lengths over count of sentences; average sentence length
return average
answer_seven()
```
### Question 8
What are the 5 most frequent parts of speech in this text? What is their frequency?
*This function should return a list of tuples of the form `(part_of_speech, frequency)` sorted in descending order of frequency.*
```
def answer_eight():
    tokens = nltk.word_tokenize(moby_raw) #tokenize moby_raw
tags = nltk.pos_tag(tokens) #create list of tokens with their respective parts of speech
pos_tag = []
for i,tag in enumerate(tags):
pos_tag.append(tag[1])
#create a list of all parts of speech in moby raw
dist = nltk.FreqDist(pos_tag) #calculate the frequency distribution of the pos tags
freq = [dist[w] for w in dist] #create a list of frequency counts from dist
pos_tag_consolidated = [w for w in dist] #create a list of parts of speech from dist
zipped = list(zip(pos_tag_consolidated,freq))
zipped.sort(reverse = True, key = lambda x:x[1])
top_5 = zipped[:5]
return top_5
answer_eight()
```
## Part 2 - Spelling Recommender
For this part of the assignment you will create three different spelling recommenders, each of which takes a list of misspelled words and recommends a correctly spelled word for every word in the list.
For every misspelled word, the recommender should find the word in `correct_spellings` that has the shortest distance* and starts with the same letter as the misspelled word, and return that word as a recommendation.
*Each of the three different recommenders will use a different distance measure (outlined below).
Each of the recommenders should provide recommendations for the three default words provided: `['cormulent', 'incendenece', 'validrate']`.
```
from nltk.corpus import words
nltk.download('words')
correct_spellings = words.words() # list of correctly spelled words
```
### Question 9
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the trigrams of the two words.**
*This function should return a list of length three:
`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
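To build intuition for the metric before using nltk's version, the Jaccard distance on trigram sets can be sketched in a few lines of pure Python (a minimal illustration; `trigrams` and `jaccard_distance` here are hypothetical helpers, not nltk functions, and "corpulent" is just a plausible nearby dictionary word):

```python
def trigrams(word):
    # set of all contiguous 3-character substrings
    return {word[i:i + 3] for i in range(len(word) - 2)}

def jaccard_distance(a, b):
    # 1 - |A ∩ B| / |A ∪ B|
    return 1.0 - len(a & b) / len(a | b)

d = jaccard_distance(trigrams("cormulent"), trigrams("corpulent"))
print(round(d, 3))  # → 0.6
```

The answer below does the same computation with `nltk.jaccard_distance` and `nltk.ngrams` over the whole word list.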
```
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
jaccard_list = []
for i in range(len(entries)):
jaccard_dict = {}
for j,word in enumerate(correct_spellings):
jaccard_dict[word] = nltk.jaccard_distance(set(nltk.ngrams(word,n=3)),
set(nltk.ngrams(entries[i],n=3)))
jaccard_list.append(jaccard_dict)
#put dictionaries in a list
for i in range(len(jaccard_list)):
jaccard_list[i] = {k: v for k, v in sorted(jaccard_list[i].items(), key = lambda item: item[1])}
#sort each dictionary in jaccard list by the jaccard distance
answer = []
for i in range(len(jaccard_list)):
for key in jaccard_list[i]:
if key.startswith(entries[i][0]):
print("{0}: {1:.2f}".format(key,jaccard_list[i][key]))
answer.append(key)
break
#return the word with the smallest jaccard distance that starts with the same letter as the misspelled word
return answer
answer_nine()
```
### Question 10
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Jaccard distance](https://en.wikipedia.org/wiki/Jaccard_index) on the 4-grams of the two words.**
*This function should return a list of length three:
`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
```
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
#same as above, but for 4-grams instead of 3-grams
jaccard_list = []
for i in range(len(entries)):
jaccard_dict = {}
for j,word in enumerate(correct_spellings):
jaccard_dict[word] = nltk.jaccard_distance(set(nltk.ngrams(word,n=4)),
set(nltk.ngrams(entries[i],n=4)))
jaccard_list.append(jaccard_dict)
#put dictionaries in list
for i in range(len(jaccard_list)):
jaccard_list[i] = {k: v for k, v in sorted(jaccard_list[i].items(), key = lambda item: item[1])}
#sort each dictionary in jaccard list by the jaccard distance
answer = []
for i in range(len(jaccard_list)):
for key in jaccard_list[i]:
if key.startswith(entries[i][0]):
print("{0}: {1:.2f}".format(key,jaccard_list[i][key]))
answer.append(key)
break
#return the word with the smallest jaccard distance that starts with the same letter as the misspelled word
#same as above
return answer
answer_ten()
```
### Question 11
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
**[Edit distance on the two words with transpositions.](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)**
*This function should return a list of length three:
`['cormulent_reccomendation', 'incendenece_reccomendation', 'validrate_reccomendation']`.*
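For intuition, the transposition-aware edit distance can be sketched in pure Python as the optimal string alignment variant (a simplified illustration, not the nltk implementation):

```python
def osa_distance(a, b):
    # Optimal string alignment: edit distance where an adjacent
    # transposition (swap of two neighboring characters) costs 1.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

print(osa_distance("validrate", "validate"))  # → 1 (one deletion)
print(osa_distance("ab", "ba"))               # → 1 (one transposition)
```

Without the transposition rule, "ab" → "ba" would cost 2 (two substitutions), which is why the flag matters for typo correction.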
```
def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
#use edit distance instead of Jaccard distance to recommend correct spelling
edit_list = []
for i in range(len(entries)):
edit_dict = {}
for j,word in enumerate(correct_spellings):
            edit_dict[word] = nltk.edit_distance(word, entries[i], transpositions=True)  # transpositions per the question
edit_list.append(edit_dict)
#put dictionaries in list
for i in range(len(edit_list)):
edit_list[i] = {k: v for k, v in sorted(edit_list[i].items(), key = lambda item: item[1])}
#sort each dictionary in jaccard list by the edit distance
answer = []
for i in range(len(edit_list)):
for key in edit_list[i]:
if key.startswith(entries[i][0]):
print("{0}: {1:.1f}".format(key,edit_list[i][key]))
answer.append(key)
break
#return the word with the smallest edit distance that starts with the same letter as the misspelled word
return answer
answer_eleven()
```
|
github_jupyter
|
# Documentation
> 201025: This notebook generates embedding vectors for pfam_motors, df_dev, and motor_toolkit from the models that have finished training:
- lstm5
- evotune_lstm_5_balanced.pt
- evotune_lstm_5_balanced_target.pt
- mini_lstm_5_balanced.pt
- mini_lstm_5_balanced_target.pt
- transformer_encoder
- evotune_seq2seq_encoder_balanced.pt
- evotune_seq2seq_encoder_balanced_target.pt
- mini_seq2seq_encoder_balanced.pt
- mini_seq2seq_encoder_balanced_target.pt
- seq2seq_attention_mini
- transformer_encoder_201025.pt
- evotune_transformerencoder_balanced.pt
- evotune_transformerencoder_balanced_target.pt
- mini_evotune_transformerencoder_balanced.pt
- mini_evotune_transformerencoder_balanced_target.pt
- output for motor_toolkit, pfamA_random, and pfamA_motors
```
import math
import torch.nn as nn
import argparse
import random
import warnings
import numpy as np
import torch
import torch.nn.functional as F
from torch import optim
import torch.backends.cudnn as cudnn
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from torch.autograd import Variable
import itertools
import pandas as pd
from torch.nn import TransformerEncoder, TransformerEncoderLayer
import math
seed = 7
torch.manual_seed(seed)
np.random.seed(seed)
pfamA_motors = pd.read_csv("../../data/pfamA_motors.csv")
pfamA_random = pd.read_csv("../../data/pfamA_random_201027.csv")
motor_toolkit = pd.read_csv("../../data/motor_tookits.csv")
pfamA_motors_balanced = pfamA_motors.groupby('clan').apply(lambda _df: _df.sample(4500,random_state=1))
pfamA_motors_balanced = pfamA_motors_balanced.apply(lambda x: x.reset_index(drop = True))
pfamA_target_name = ["PF00349","PF00022","PF03727","PF06723",\
"PF14450","PF03953","PF12327","PF00091","PF10644",\
"PF13809","PF14881","PF00063","PF00225","PF03028"]
pfamA_target = pfamA_motors.loc[pfamA_motors["pfamA_acc"].isin(pfamA_target_name),:]
aminoacid_list = [
'A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L',
'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y'
]
clan_list = ["actin_like","tubulin_c","tubulin_binding","p_loop_gtpase"]
aa_to_ix = dict(zip(aminoacid_list, np.arange(1, 21)))
clan_to_ix = dict(zip(clan_list, np.arange(0, 4)))
def word_to_index(seq,to_ix):
"Returns a list of indices (integers) from a list of words."
return [to_ix.get(word, 0) for word in seq]
ix_to_aa = dict(zip(np.arange(1, 21), aminoacid_list))
ix_to_clan = dict(zip(np.arange(0, 4), clan_list))
def index_to_word(ixs,ix_to):
"Returns a list of words, given a list of their corresponding indices."
return [ix_to.get(ix, 'X') for ix in ixs]
def prepare_sequence(seq):
idxs = word_to_index(seq[0:-1],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
def prepare_labels(seq):
idxs = word_to_index(seq[1:],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
def prepare_eval(seq):
idxs = word_to_index(seq[:],aa_to_ix)
return torch.tensor(idxs, dtype=torch.long)
prepare_labels('YCHXXXXX')
# set device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
class PositionalEncoding(nn.Module):
"""
PositionalEncoding module injects some information about the relative or absolute position of
the tokens in the sequence. The positional encodings have the same dimension as the embeddings
so that the two can be summed. Here, we use sine and cosine functions of different frequencies.
"""
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
# x = x + self.pe[:x.size(0), :]
x = x.unsqueeze(0).transpose(0, 1)
# print("x.size() : ", x.size())
# print("self.pe.size() :", self.pe[:x.size(0),:,:].size())
x = torch.add(x ,Variable(self.pe[:x.size(0),:,:], requires_grad=False))
return self.dropout(x)
class TransformerModel(nn.Module):
def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
super(TransformerModel, self).__init__()
from torch.nn import TransformerEncoder, TransformerEncoderLayer
self.model_type = 'Transformer'
self.src_mask = None
self.pos_encoder = PositionalEncoding(ninp)
encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
self.decoder = nn.Linear(ninp, ntoken)
self.init_weights()
def _generate_square_subsequent_mask(self, sz):
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def init_weights(self):
initrange = 0.1
self.encoder.weight.data.uniform_(-initrange, initrange)
self.decoder.bias.data.zero_()
self.decoder.weight.data.uniform_(-initrange, initrange)
    def forward(self, src):
        # mask generation is disabled: for embedding extraction we use the
        # full (bidirectional) context, so self.src_mask stays None
        src = self.encoder(src) * math.sqrt(self.ninp)
        src = self.pos_encoder(src)
        output_encoded = self.transformer_encoder(src, self.src_mask)
output = self.decoder(output_encoded)
return output,output_encoded
ntokens = len(aminoacid_list) + 1 # the size of vocabulary
emsize = 12 # embedding dimension
nhid = 100 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 6 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 12 # the number of heads in the multiheadattention models
dropout = 0.1 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
model.eval()
def generate_embedding_transformer(model,dict_file,dat,dat_name,out_path,out_dir,seq_col):
# initialize network
model.load_state_dict(torch.load(dict_file))
print("loaded dict file for weights " + dict_file)
print("output embedding for " + dat_name)
hn_vector = []
print_every = 1000
for epoch in np.arange(0, dat.shape[0]):
with torch.no_grad():
seq = dat.iloc[epoch, seq_col]
if len(seq) > 5000:
continue
sentence_in = prepare_eval(seq)
sentence_in = sentence_in.to(device = device)
_,hn = model(sentence_in)
hn = hn.sum(dim = 0).cpu().detach().numpy()
hn_vector.append(hn)
if epoch % print_every == 0:
                print(f"At Epoch: {epoch}")
print(seq)
hn_vector = np.array(hn_vector)
hn_vector = np.squeeze(hn_vector, axis=1)
print(hn_vector.shape)
print(out_dir+dat_name+"_"+out_path)
np.save(out_dir+dat_name+"_"+out_path, hn_vector)
return
dict_files = ["evotune_transformer_encoder_mlm_balanced_target.pt","evotune_transformer_encoder_mlm_balanced.pt","evotune_transformerencoder_balanced_target.pt",\
"evotune_transformerencoder_balanced.pt","mini_transformer_encoder_mlm_balanced_target.pt","mini_transformer_encoder_mlm_balanced.pt",\
"mini_transformerencoder_balanced_target.pt","mini_transformerencoder_balanced.pt","transformer_encoder_201025.pt","transformer_encoder_mlm_201025.pt"]
dict_files = ["../../data/201025/"+dict_file for dict_file in dict_files]
dict_files
# "../data/hn_lstm5_motortoolkit.npy"
out_paths = ["mlm_evotune_balanced_target.npy","mlm_evotune_balanced.npy","evotune_balanced_target.npy",\
"evotune_balanced.npy", "mlm_mini_balanced_target.npy","mlm_mini_balanced.npy",\
"mini_balanced_target.npy", "mini_balanced.npy", "raw.npy","mlm_raw.npy"]
out_dir = "../../out/201027/embedding/transformer_encoder/"
out_paths
len(dict_files)==len(out_paths)
pfamA_target.iloc[1,3]
data = [pfamA_motors_balanced,pfamA_target,pfamA_random,motor_toolkit]
data_names = ["pfamA_motors_balanced", "pfamA_target" , "pfamA_random", "motor_toolkit"]
seq_cols = [3,3,2,7]
for i in range(len(dict_files)):
    dict_file = dict_files[i]
    out_path = out_paths[i]
    for j in range(len(data)):  # use a distinct index to avoid shadowing i
        dat = data[j]
        dat_name = data_names[j]
        seq_col = seq_cols[j]
        generate_embedding_transformer(model,dict_file,dat,dat_name,out_path,out_dir,seq_col)
```
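As a quick standalone check of the sinusoidal positional encoding used in `PositionalEncoding` above, the buffer construction can be reproduced in NumPy alone (a sketch independent of the model, with a small illustrative `max_len`):

```python
import numpy as np

d_model, max_len = 12, 50  # d_model matches emsize above; max_len is illustrative
position = np.arange(max_len, dtype=float)[:, None]
div_term = np.exp(np.arange(0, d_model, 2, dtype=float) * (-np.log(10000.0) / d_model))
pe = np.zeros((max_len, d_model))
pe[:, 0::2] = np.sin(position * div_term)  # even dimensions: sine
pe[:, 1::2] = np.cos(position * div_term)  # odd dimensions: cosine

print(pe.shape)      # → (50, 12)
print(pe[0, 0], pe[0, 1])  # position 0: sin(0) = 0.0, cos(0) = 1.0
```

Each row is the encoding added to the token embedding at that position; frequencies decay geometrically across dimensions so that nearby positions get similar, but distinguishable, vectors.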
|
github_jupyter
|
```
# 8.3.3. Natural Language Statistics
import random
import torch
from d2l import torch as d2l
tokens = d2l.tokenize(d2l.read_time_machine())
# Since each text line is not necessarily a sentence or a paragraph, we
# concatenate all text lines
corpus = [token for line in tokens for token in line]
vocab = d2l.Vocab(corpus)
vocab.token_freqs[:10]
freqs = [freq for token, freq in vocab.token_freqs]
d2l.plot(freqs, xlabel='token: x', ylabel='frequency: n(x)', xscale='log',
yscale='log')
bigram_tokens = [pair for pair in zip(corpus[:-1], corpus[1:])]
bigram_vocab = d2l.Vocab(bigram_tokens)
bigram_vocab.token_freqs[:10]
trigram_tokens = [
triple for triple in zip(corpus[:-2], corpus[1:-1], corpus[2:])]
trigram_vocab = d2l.Vocab(trigram_tokens)
trigram_vocab.token_freqs[:10]
bigram_freqs = [freq for token, freq in bigram_vocab.token_freqs]
trigram_freqs = [freq for token, freq in trigram_vocab.token_freqs]
d2l.plot([freqs, bigram_freqs, trigram_freqs], xlabel='token: x',
ylabel='frequency: n(x)', xscale='log', yscale='log',
legend=['unigram', 'bigram', 'trigram'])
# 8.3.4. Reading Long Sequence Data
# 8.3.4.1. Random Sampling
def seq_data_iter_random(corpus, batch_size, num_steps): #@save
"""Generate a minibatch of subsequences using random sampling."""
# Start with a random offset (inclusive of `num_steps - 1`) to partition a
# sequence
corpus = corpus[random.randint(0, num_steps - 1):]
# Subtract 1 since we need to account for labels
num_subseqs = (len(corpus) - 1) // num_steps
# The starting indices for subsequences of length `num_steps`
initial_indices = list(range(0, num_subseqs * num_steps, num_steps))
# In random sampling, the subsequences from two adjacent random
# minibatches during iteration are not necessarily adjacent on the
# original sequence
random.shuffle(initial_indices)
def data(pos):
# Return a sequence of length `num_steps` starting from `pos`
return corpus[pos:pos + num_steps]
num_batches = num_subseqs // batch_size
for i in range(0, batch_size * num_batches, batch_size):
# Here, `initial_indices` contains randomized starting indices for
# subsequences
initial_indices_per_batch = initial_indices[i:i + batch_size]
X = [data(j) for j in initial_indices_per_batch]
Y = [data(j + 1) for j in initial_indices_per_batch]
yield torch.tensor(X), torch.tensor(Y)
my_seq = list(range(35))
for X, Y in seq_data_iter_random(my_seq, batch_size=2, num_steps=5):
print('X: ', X, '\nY:', Y)
# 8.3.4.2. Sequential Partitioning
def seq_data_iter_sequential(corpus, batch_size, num_steps): #@save
"""Generate a minibatch of subsequences using sequential partitioning."""
# Start with a random offset to partition a sequence
offset = random.randint(0, num_steps)
num_tokens = ((len(corpus) - offset - 1) // batch_size) * batch_size
Xs = torch.tensor(corpus[offset:offset + num_tokens])
Ys = torch.tensor(corpus[offset + 1:offset + 1 + num_tokens])
Xs, Ys = Xs.reshape(batch_size, -1), Ys.reshape(batch_size, -1)
num_batches = Xs.shape[1] // num_steps
for i in range(0, num_steps * num_batches, num_steps):
X = Xs[:, i:i + num_steps]
Y = Ys[:, i:i + num_steps]
yield X, Y
for X, Y in seq_data_iter_sequential(my_seq, batch_size=2, num_steps=5):
print('X: ', X, '\nY:', Y)
class SeqDataLoader: #@save
"""An iterator to load sequence data."""
def __init__(self, batch_size, num_steps, use_random_iter, max_tokens):
if use_random_iter:
self.data_iter_fn = d2l.seq_data_iter_random
else:
self.data_iter_fn = d2l.seq_data_iter_sequential
self.corpus, self.vocab = d2l.load_corpus_time_machine(max_tokens)
self.batch_size, self.num_steps = batch_size, num_steps
def __iter__(self):
return self.data_iter_fn(self.corpus, self.batch_size, self.num_steps)
def load_data_time_machine(batch_size, num_steps, #@save
use_random_iter=False, max_tokens=10000):
"""Return the iterator and the vocabulary of the time machine dataset."""
data_iter = SeqDataLoader(batch_size, num_steps, use_random_iter,
max_tokens)
return data_iter, data_iter.vocab
```
|
github_jupyter
|
## 7-3. Portfolio Optimization with the HHL Algorithm
In this section, following Ref. [1], we compute an optimal portfolio (asset allocation) from historical stock-price data.
Portfolio optimization is one of the problems that the HHL algorithm covered in [Section 7-1](7.1_quantum_phase_estimation_detailed.ipynb) is expected to solve faster than conventional methods.
Concretely, we consider investing in the stocks of the four GAFA companies (Google, Apple, Facebook, Amazon) and ask which asset allocation achieves high returns at the lowest risk.
### Fetching the stock price data
First, we fetch each company's stock price data.
* We use daily data for the four GAFA companies
* Prices are fetched from the Yahoo! Finance database via pandas_datareader
* We use the dollar-denominated adjusted close (Adj Close)
```
# Install pandas and pandas_datareader, required for fetching the data
# !pip install pandas pandas_datareader
import numpy as np
import pandas as pd
import pandas_datareader.data as web
import datetime
import matplotlib.pyplot as plt
# 銘柄選択
codes = ['GOOG', 'AAPL', 'FB', 'AMZN'] # GAFA
# 2017年の1年間のデータを使用
start = datetime.datetime(2017, 1, 1)
end = datetime.datetime(2017, 12, 31)
# Yahoo! Financeから日次の株価データを取得
data = web.DataReader(codes, 'yahoo', start, end)
df = data['Adj Close']
## 直近のデータの表示
display(df.tail())
## 株価をプロットしてみる
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 6))
df.loc[:,['AAPL', 'FB']].plot(ax=axes[0])
df.loc[:,['GOOG', 'AMZN']].plot(ax=axes[1])
```
Note: the four tickers are split into two groups only because the price levels within each group are similar, which makes the plots easier to read; there is no deeper meaning.
### Data preprocessing
Next, we convert the fetched prices into daily returns and compute a few statistics.
#### Conversion to daily returns
The daily return (rate of change) $y_t$ of an individual stock ($t$ is the date) is defined as
$$
y_t = \frac{P_t - P_{t-1}}{P_{t-1}}
$$
This is obtained with the `pct_change()` method of a pandas `DataFrame`.
```
daily_return = df.pct_change()
display(daily_return.tail())
```
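As a self-contained illustration of the same conversion (with made-up prices, since the `'yahoo'` data feed used above may no longer be available), `pct_change()` computes exactly the ratio $(P_t - P_{t-1})/P_{t-1}$ row by row:

```python
import pandas as pd

# Hypothetical closing prices for two tickers over four days (illustrative only)
prices = pd.DataFrame({'AAA': [100.0, 102.0, 99.96, 101.0],
                       'BBB': [50.0, 50.5, 51.0, 49.98]})

# First row becomes NaN: there is no previous price to compare against
returns = prices.pct_change()
print(returns.round(4))
```

For example, the second entry of `AAA` is `(102 - 100) / 100 = 0.02`, i.e. a 2% daily return.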
#### Expected returns
We compute the expected return $\vec R$ of each stock, using the arithmetic mean of past returns:
$$
\vec R = \frac{1}{T} \sum_{t= 1}^{T} \vec y_t
$$
```
expected_return = daily_return.dropna(how='all').mean() * 252 # multiply by 252, the number of trading days per year, to annualize
print(expected_return)
```
#### Variance-covariance matrix
The unbiased sample variance-covariance matrix $\Sigma$ of the returns is defined as
$$
\Sigma = \frac{1}{T-1} \sum_{t=1}^{T} ( \vec y_t -\vec R ) (\vec y_t -\vec R )^T
$$
```
cov = daily_return.dropna(how='all').cov() * 252 # annualized
display(cov)
```
### Portfolio optimization
With the preparations in place, let us tackle portfolio optimization.
First, we represent a portfolio (i.e., an asset allocation) by a four-component vector $\vec{w} = (w_0,w_1,w_2,w_3)^T$.
Each component is the fraction (weight) of the assets held in one stock; for example, $\vec{w}=(1,0,0,0)$ is the portfolio that puts 100% of the assets into Google stock.
We consider portfolios that satisfy
$$
\min_{\vec{w}} \frac{1}{2} \vec{w}^T \Sigma \vec{w} \:\:\: \text{s.t.} \:\: \vec R^T \vec w = \mu , \: \vec 1^T \vec w =1
$$
This expression means that, under the constraints
* the expected return (mean return) of the portfolio equals $\mu$,
* the portfolio weights sum to 1 (with $\vec 1 = (1,1,1,1)^T$),
we
* minimize the variance of the portfolio return.
In other words, when we aim for a future return of $\mu$, the best portfolio is the one that keeps the fluctuation (risk) around it as small as possible. This problem setting is known as the [Markowitz mean-variance approach](https://ja.wikipedia.org/wiki/現代ポートフォリオ理論) and is one of the foundational ideas of modern financial engineering.
Using the method of Lagrange multipliers, one can show that the $\vec{w}$ satisfying the above conditions is obtained by solving the linear system
$$
\begin{gather}
W
\left(
\begin{array}{c}
\eta \\
\theta \\
\vec w
\end{array}
\right)
=
\left(
\begin{array}{c}
\mu \\
1 \\
\vec 0
\end{array}
\right), \tag{1}\\
W =
\left(
\begin{array}{ccc}
0 & 0 & \vec R^T \\
0 & 0 & \vec 1^T \\
\vec{R} &\vec 1 & \Sigma
\end{array}
\right)
\end{gather}
$$
Here $\eta$ and $\theta$ are the parameters of the Lagrange multiplier method.
Therefore, to find the optimal portfolio $\vec w$, we only have to solve the linear system (1) for $\vec w$.
With this, the portfolio-optimization problem has been reduced to a system of linear equations to which the HHL algorithm applies.
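The claim can be checked entirely classically: building the block matrix of Eq. (1) for a toy $\Sigma$ and $\vec R$ (all numbers below are made up for illustration) and solving it with NumPy yields weights that satisfy both constraints exactly.

```python
import numpy as np

# Toy expected returns and covariance (illustrative numbers, not market data)
R = np.array([0.30, 0.25, 0.20, 0.35])
Sigma = np.diag([0.04, 0.03, 0.05, 0.06])
one = np.ones(4)
mu = 0.1  # target portfolio return

# Block matrix W of Eq. (1)
W = np.zeros((6, 6))
W[0, 2:] = R
W[1, 2:] = one
W[2:, 0] = R
W[2:, 1] = one
W[2:, 2:] = Sigma

rhs = np.array([mu, 1.0, 0, 0, 0, 0])
sol = np.linalg.solve(W, rhs)
w = sol[2:]  # the weights; sol[0], sol[1] are the Lagrange multipliers

print(w, R @ w, one @ w)  # constraints: R.w = mu, 1.w = 1
```

Since `Sigma` is positive definite and the two constraints are independent, `W` is invertible and the solution is unique.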
#### Constructing the matrix W
```
R = expected_return.values
Pi = np.ones(4)
S = cov.values
row1 = np.append(np.zeros(2), R).reshape(1,-1)
row2 = np.append(np.zeros(2), Pi).reshape(1,-1)
row3 = np.concatenate([R.reshape(-1,1), Pi.reshape(-1,1), S], axis=1)
W = np.concatenate([row1, row2, row3])
np.set_printoptions(linewidth=200)
print(W)
## check the eigenvalues of W -> they fall within [-pi, pi]
print(np.linalg.eigh(W)[0])
```
#### Constructing the right-hand-side vector
Specifying the expected portfolio return $\mu$ below lets us compute the lowest-risk portfolio achieving that return. $\mu$ can be set freely; in general, a larger expected return comes with a larger risk. As an example we set it to 10% (a rather conservative choice, given that GAFA stocks were rising sharply during this period).
```
mu = 0.1 # target portfolio return (a parameter set by hand)
xi = 1.0
mu_xi_0 = np.append(np.array([mu, xi]), np.zeros_like(R)) ## right-hand-side vector of Eq. (1)
print(mu_xi_0)
```
#### Enlarging the matrix so a quantum register can hold it
$W$ is 6-dimensional, so 3 qubits suffice to treat it on a quantum system ($2^3 = 8$).
We therefore also build a matrix and a vector whose extra 2 dimensions are padded with zeros.
```
nbit = 3 ## number of qubits used for the state
N = 2**nbit
W_enl = np.zeros((N, N)) ## "enl" stands for "enlarged"
W_enl[:W.shape[0], :W.shape[1]] = W.copy()
mu_xi_0_enl = np.zeros(N)
mu_xi_0_enl[:len(mu_xi_0)] = mu_xi_0.copy()
```
We are now ready to solve the linear system (1).
### Computing the minimum-variance portfolio with the HHL algorithm
Let us now solve the linear system (1) using the HHL algorithm.
As preparation, we define
* a function `input_state_gate` that returns a quantum circuit transforming $|0\cdots0\rangle \to \sum_i x_i |i \rangle$ according to classical data $\mathbf{x}$ (properly this should be built with the qRAM idea, but since we use a simulator we implement it as a non-unitary gate; normalization is also ignored)
* a function `CPhaseGate` that returns a controlled phase gate
* a function `QFT_gate` that returns a gate performing the quantum Fourier transform
```
# Install Qulacs
# !pip install qulacs
## Run the following only on Google Colaboratory or in a jupyter notebook on Linux/Mac.
## It makes Qulacs errors print correctly.
!pip3 install wurlitzer
%load_ext wurlitzer
import numpy as np
from qulacs import QuantumCircuit, QuantumState, gate
from qulacs.gate import merge, Identity, H, SWAP
def input_state_gate(start_bit, end_bit, vec):
"""
    Making a quantum gate which transforms |0> to \sum_i x[i]|i>, where x[i] is the input vector.
    !!! this uses a 2**n x 2**n matrix, so it is quite memory-consuming.
    !!! this gate is not unitary (we assume that the input state is |0>)
Args:
int start_bit: first index of qubit which the gate applies
int end_bit: last index of qubit which the gate applies
np.ndarray vec: input vector.
Returns:
qulacs.QuantumGate
"""
nbit = end_bit - start_bit + 1
assert vec.size == 2**nbit
mat_0tox = np.eye(vec.size, dtype=complex)
mat_0tox[:,0] = vec
return gate.DenseMatrix(np.arange(start_bit, end_bit+1), mat_0tox)
def CPhaseGate(target, control, angle):
"""
    Create a controlled phase gate diag(1, e^{i*angle}) with a control qubit. (qulacs.gate is required)
Args:
int target: index of target qubit.
int control: index of control qubit.
float64 angle: angle of phase gate.
Returns:
QuantumGateBase.DenseMatrix: diag(1, exp(i*angle)).
"""
CPhaseGate = gate.DenseMatrix(target, np.array( [[1,0], [0,np.cos(angle)+1.j*np.sin(angle)]]) )
CPhaseGate.add_control_qubit(control, 1)
return CPhaseGate
def QFT_gate(start_bit, end_bit, Inverse = False):
"""
    Making a gate which performs the quantum Fourier transformation from start_bit to end_bit.
(Definition below is the case when start_bit = 0 and end_bit=n-1)
We associate an integer j = j_{n-1}...j_0 to quantum state |j_{n-1}...j_0>.
We define QFT as
|k> = |k_{n-1}...k_0> = 1/sqrt(2^n) sum_{j=0}^{2^n-1} exp(2pi*i*(k/2^n)*j) |j>.
then, |k_m > = 1/sqrt(2)*(|0> + exp(i*2pi*0.j_{n-1-m}...j_0)|1> )
When Inverse=True, the gate represents Inverse QFT,
|k> = |k_{n-1}...k_0> = 1/sqrt(2^n) sum_{j=0}^{2^n-1} exp(-2pi*i*(k/2^n)*j) |j>.
Args:
int start_bit: first index of qubits where we apply QFT.
int end_bit: last index of qubits where we apply QFT.
bool Inverse: When True, the gate perform inverse-QFT ( = QFT^{\dagger}).
Returns:
qulacs.QuantumGate: QFT gate which acts on a region between start_bit and end_bit.
"""
gate = Identity(start_bit) ## make empty gate
n = end_bit - start_bit + 1 ## size of QFT
## loop from j_{n-1}
for target in range(end_bit, start_bit-1, -1):
gate = merge(gate, H(target)) ## 1/sqrt(2)(|0> + exp(i*2pi*0.j_{target})|1>)
for control in range(start_bit, target):
gate = merge( gate, CPhaseGate(target, control, (-1)**Inverse * 2.*np.pi/2**(target-control+1)) )
## perform SWAP between (start_bit + s)-th bit and (end_bit - s)-th bit
for s in range(n//2): ## s runs 0 to n//2-1
gate = merge(gate, SWAP(start_bit + s, end_bit - s))
## return final circuit
return gate
```
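As a quick, qulacs-independent sanity check of the convention used above: the QFT defined by $|k\rangle \mapsto \frac{1}{\sqrt{2^n}}\sum_j e^{2\pi i jk/2^n}|j\rangle$ is simply the complex conjugate of the orthonormal DFT matrix, which NumPy can produce directly (this checks the matrix convention only, it does not exercise the `QFT_gate` circuit itself):

```python
import numpy as np

n = 3
N = 2**n
# Orthonormal DFT matrix: F[k, j] = exp(-2*pi*i*k*j/N) / sqrt(N)
F = np.fft.fft(np.eye(N), norm='ortho')
QFT = F.conj()  # QFT uses the opposite sign in the exponent

# Unitarity and one explicit matrix element
print(np.allclose(QFT @ QFT.conj().T, np.eye(N)))          # True
print(QFT[1, 1], np.exp(2j * np.pi / N) / np.sqrt(N))      # equal
```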
First we set the parameters needed for the HHL algorithm.
We take the clock-register size `reg_nbit` to be `7`, and the scaling factor `scale_fac` for the matrix $W$ to be `1` (i.e., no scaling).
The coefficient $c$ used in the controlled rotation gate is taken to be half of the smallest nonzero value representable with `reg_nbit` bits.
```
# number of register qubits used for phase estimation
reg_nbit = 7
## factor by which W_enl is scaled
scale_fac = 1.
W_enl_scaled = scale_fac * W_enl
## smallest eigenvalue of W_enl_scaled that we assume;
## since the projection succeeds with certainty here, we take a constant multiple of the smallest value representable in the register
C = 0.5*(2 * np.pi * (1. / 2**(reg_nbit) ))
```
Now we write the core of the HHL algorithm. Since we use the qulacs simulator, several simplifications are made;
regard this implementation as a way to get a feel for how the HHL algorithm works.
* the preparation of the input state $|\mathbf{b}\rangle$ is simplified
* for the $e^{iA}$ used in quantum phase estimation, we use $A$ diagonalized on a classical computer
* the controlled rotation gate that takes the inverse is also implemented by classically constructing the matrix
* we projectively measure the ancilla bit $|0 \rangle{}_{S}$ and keep only the states where the outcome `0` is obtained
(for implementation convenience, the action of the controlled rotation gate is defined oppositely to [Section 7-1](7.1_quantum_phase_estimation_detailed.ipynb))
```
from functools import reduce
## diagonalization. AP = PD <-> A = P*D*P^dag
D, P = np.linalg.eigh(W_enl_scaled)
#####################################
### Build the HHL quantum circuit. Starting from the 0th qubit: the qubits on which A acts
### (0th to (nbit-1)-th), the register qubits ((nbit)-th to (nbit+reg_nbit-1)-th),
### and the qubit for the conditional rotation ((nbit+reg_nbit)-th).
#####################################
total_qubits = nbit + reg_nbit + 1
total_circuit = QuantumCircuit(total_qubits)
## ------ prepare the input vector b on qubits 0 to nbit-1 ------
## Properly one should use a qRAM algorithm, but here we use our hand-made input gate.
## In qulacs this could also be done with state.load(b_enl).
state = QuantumState(total_qubits)
state.set_zero_state()
b_gate = input_state_gate(0, nbit-1, mu_xi_0_enl)
total_circuit.add_gate(b_gate)
## ------- apply Hadamard gates to the register qubits -------
for register in range(nbit, nbit+reg_nbit): ## from nbit to nbit+reg_nbit-1
    total_circuit.add_H_gate(register)
## ------- implement phase estimation -------
## U := e^{i*A*t}, with eigenvalues diag( {e^{i*2pi*phi_k}}_{k=0, ..., N-1} )
## Implement \sum_j |j><j| exp(i*A*t*j) to register bits
for register in range(nbit, nbit+reg_nbit):
    ## implement U^{2^{register-nbit}},
    ## reusing the classical diagonalization
    U_mat = reduce(np.dot, [P, np.diag(np.exp( 1.j * D * (2**(register-nbit)) )), P.T.conj()] )
    U_gate = gate.DenseMatrix(np.arange(nbit), U_mat)
    U_gate.add_control_qubit(register, 1) ## add the control qubit
    total_circuit.add_gate(U_gate)
## ------- Perform inverse QFT to register bits -------
total_circuit.add_gate(QFT_gate(nbit, nbit+reg_nbit-1, Inverse=True))
## ------- apply the conditional rotation -------
## the eigenvalue of A*t corresponding to register |phi> is l = 2pi * 0.phi = 2pi * (phi / 2**reg_nbit).
## the conditional rotation is defined as (opposite to the main text)
## |phi>|0> -> C/(lambda)|phi>|0> + sqrt(1 - C^2/(lambda)^2)|phi>|1>.
## since this is a classical simulation, we simply construct the gate matrix explicitly.
condrot_mat = np.zeros( (2**(reg_nbit+1), (2**(reg_nbit+1))), dtype=complex)
for index in range(2**reg_nbit):
    lam = 2 * np.pi * (float(index) / 2**(reg_nbit) )
    index_0 = index ## integer which represents |index>|0>
    index_1 = index + 2**reg_nbit ## integer which represents |index>|1>
    if lam >= C:
        if lam >= np.pi: ## eigenvalues are pre-scaled into [-pi, pi], so [pi, 2pi] corresponds to negative eigenvalues
            lam = lam - 2*np.pi
        condrot_mat[index_0, index_0] = C / lam
        condrot_mat[index_1, index_0] = np.sqrt( 1 - C**2/lam**2 )
        condrot_mat[index_0, index_1] = - np.sqrt( 1 - C**2/lam**2 )
        condrot_mat[index_1, index_1] = C / lam
    else:
        condrot_mat[index_0, index_0] = 1.
        condrot_mat[index_1, index_1] = 1.
## convert to a DenseMatrix gate and add it
condrot_gate = gate.DenseMatrix(np.arange(nbit, nbit+reg_nbit+1), condrot_mat)
total_circuit.add_gate(condrot_gate)
## ------- Perform QFT to register bits -------
total_circuit.add_gate(QFT_gate(nbit, nbit+reg_nbit-1, Inverse=False))
## ------- implement the inverse of phase estimation (U^\dagger = e^{-iAt}) -------
for register in range(nbit, nbit+reg_nbit): ## from nbit to nbit+reg_nbit-1
    ## implement {U^{\dagger}}^{2^{register-nbit}},
    ## reusing the classical diagonalization
    U_mat = reduce(np.dot, [P, np.diag(np.exp( -1.j* D * (2**(register-nbit)) )), P.T.conj()] )
    U_gate = gate.DenseMatrix(np.arange(nbit), U_mat)
    U_gate.add_control_qubit(register, 1) ## add the control qubit
    total_circuit.add_gate(U_gate)
## ------- apply Hadamard gates to the register qubits -------
for register in range(nbit, nbit+reg_nbit):
    total_circuit.add_H_gate(register)
## ------- project the ancilla qubit onto 0; in qulacs this is implemented as a non-unitary gate -------
total_circuit.add_P0_gate(nbit+reg_nbit)
#####################################
### Run the HHL quantum circuit and extract the result
#####################################
total_circuit.update_quantum_state(state)
## qubits 0 through nbit-1 correspond to the computed result |x>
result = state.get_vector()[:2**nbit].real
x_HHL = result/C * scale_fac
```
Comparing the solution `x_HHL` from the HHL algorithm with the solution `x_exact` obtained by a conventional classical computation, we find that they roughly agree. (Several parameters control the accuracy of the HHL algorithm, e.g. `reg_nbit`; try varying them and see what happens.)
```
## exact solution
x_exact = np.linalg.lstsq(W_enl, mu_xi_0_enl, rcond=0)[0]
print("HHL: ", x_HHL)
print("exact:", x_exact)
rel_error = np.linalg.norm(x_HHL- x_exact) / np.linalg.norm(x_exact)
print("rel_error", rel_error)
```
Extracting just the actual weight components:
```
w_opt_HHL = x_HHL[2:6]
w_opt_exact = x_exact[2:6]
w_opt = pd.DataFrame(np.vstack([w_opt_exact, w_opt_HHL]).T, index=df.columns, columns=['exact', 'HHL'])
w_opt
w_opt.plot.bar()
```
Note: a negative weight corresponds to "short selling" (borrowing shares and selling them, a technique that profits when the price falls). Since the target return of 10% is quite small for GAFA stocks (each with an expected return of 30-40% on its own), the optimizer appears to short some stocks to bring the overall expected return down.
### Appendix: backtesting
Validating an investment rule derived from past data against subsequent data is called "backtesting", and it is important for assessing the effectiveness of the rule.
Here we observe how the value of the portfolio constructed from the 2017 data would have changed over the following year, 2018.
```
# Use one year of data from 2018
start = datetime.datetime(2017, 12, 30)
end = datetime.datetime(2018, 12, 31)
# Fetch daily stock-price data from Yahoo! Finance
data = web.DataReader(codes, 'yahoo', start, end)
df2018 = data['Adj Close']
display(df2018.tail())
## plot the stock prices
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 6))
df2018.loc[:,['AAPL', 'FB']].plot(ax=axes[0])
df2018.loc[:,['GOOG', 'AMZN']].plot(ax=axes[1])
# time evolution of the portfolio value
pf_value = df2018.dot(w_opt)
pf_value.head()
# the initial amounts may differ between exact and HHL, so we look at returns normalized by the value at the start of the period
pf_value.exact = pf_value.exact / pf_value.exact[0]
pf_value.HHL = pf_value.HHL / pf_value.HHL[0]
print(pf_value.tail())
pf_value.plot(figsize=(9, 6))
```
In 2018 the GAFA stocks other than Amazon were weak, so the portfolio lost roughly 20%; the exact solution fares somewhat better.
Incidentally, since what we originally minimized was the risk, computing the risk over this one-year period shows that the exact solution indeed has the smaller risk.
```
pf_value.pct_change().std() * np.sqrt(252) ## annualized
```
### References
[1] P. Rebentrost and S. Lloyd, "Quantum computational finance: quantum algorithm for portfolio optimization", https://arxiv.org/abs/1811.03975
**Interactive mapping and analysis of geospatial big data using geemap and Google Earth Engine**
This notebook was developed for the geemap workshop at the [GeoPython 2021 Conference](https://2021.geopython.net).
Authors: [Qiusheng Wu](https://github.com/giswqs), [Kel Markert](https://github.com/KMarkert)
Link to this notebook: https://gishub.org/geopython
Recorded video: https://www.youtube.com/watch?v=wGjpjh9IQ5I
[](https://www.youtube.com/watch?v=wGjpjh9IQ5I)
## Introduction
### Description
Google Earth Engine (GEE) is a cloud computing platform with a multi-petabyte catalog of satellite imagery and geospatial datasets. It enables scientists, researchers, and developers to analyze and visualize changes on the Earth’s surface. The geemap Python package provides GEE users with an intuitive interface to manipulate, analyze, and visualize geospatial big data interactively in a Jupyter-based environment. The topics covered in this workshop include:
1. Introducing geemap and the Earth Engine Python API
2. Creating interactive maps
3. Searching GEE data catalog
4. Displaying GEE datasets
5. Classifying images using machine learning algorithms
6. Computing statistics and exporting results
7. Producing publication-quality maps
8. Building and deploying interactive web apps, among others
This workshop is intended for scientific programmers, data scientists, geospatial analysts, and concerned citizens of Earth. The attendees are expected to have a basic understanding of Python and the Jupyter ecosystem. Familiarity with Earth science and geospatial datasets is useful but not required.
### Useful links
- [GeoPython 2021 Conference website](https://2021.geopython.net)
- [Google Earth Engine](https://earthengine.google.com)
- [geemap.org](https://geemap.org)
- [Google Earth Engine and geemap Python Tutorials](https://www.youtube.com/playlist?list=PLAxJ4-o7ZoPccOFv1dCwvGI6TYnirRTg3) (55 videos with a total length of 15 hours)
- [Spatial Data Management with Google Earth Engine](https://www.youtube.com/playlist?list=PLAxJ4-o7ZoPdz9LHIJIxHlZe3t-MRCn61) (19 videos with a total length of 9 hours)
- [Ask geemap questions on GitHub](https://github.com/giswqs/geemap/discussions)
### Prerequisite
- A Google Earth Engine account. Sign up [here](https://earthengine.google.com) if needed.
- [Miniconda](https://docs.conda.io/en/latest/miniconda.html) or [Anaconda](https://www.anaconda.com/products/individual)
### Set up a conda environment
```
conda create -n geo python=3.8
conda activate geo
conda install geemap -c conda-forge
conda install jupyter_contrib_nbextensions -c conda-forge
jupyter contrib nbextension install --user
```
## geemap basics
### Import libraries
```
import os
import ee
import geemap
```
### Create an interactive map
```
Map = geemap.Map()
Map
```
### Customize the default map
You can specify the center (lat, lon) and zoom level for the default map. The lite mode will only show the zoom in/out tool.
```
Map = geemap.Map(center=(40, -100), zoom=4, lite_mode=True)
Map
```
### Add basemaps
```
Map = geemap.Map()
Map.add_basemap('HYBRID')
Map
from geemap.basemaps import basemaps
Map.add_basemap(basemaps.OpenTopoMap)
```
### Change basemaps without coding

```
Map = geemap.Map()
Map
```
### Add WMS and XYZ tile layers
Examples: https://viewer.nationalmap.gov/services/
```
Map = geemap.Map()
url = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Terrain', attribution='Google')
Map
naip_url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
Map.add_wms_layer(
url=naip_url, layers='0', name='NAIP Imagery', format='image/png', shown=True
)
```
### Use drawing tools
```
Map = geemap.Map()
Map
# Map.user_roi.getInfo()
# Map.user_rois.getInfo()
```
### Convert GEE JavaScript to Python
https://developers.google.com/earth-engine/guides/image_visualization
```
js_snippet = """
// Load an image.
var image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318');
// Define the visualization parameters.
var vizParams = {
bands: ['B5', 'B4', 'B3'],
min: 0,
max: 0.5,
gamma: [0.95, 1.1, 1]
};
// Center the map and display the image.
Map.setCenter(-122.1899, 37.5010, 10); // San Francisco Bay
Map.addLayer(image, vizParams, 'false color composite');
"""
geemap.js_snippet_to_py(
js_snippet, add_new_cell=True, import_ee=True, import_geemap=True, show_map=True
)
```
You can also convert GEE JavaScript to Python without coding.

```
Map = geemap.Map()
Map
```
## Earth Engine datasets
### Load Earth Engine datasets
```
Map = geemap.Map()
# Add Earth Engine datasets
dem = ee.Image('USGS/SRTMGL1_003')
landcover = ee.Image("ESA/GLOBCOVER_L4_200901_200912_V2_3").select('landcover')
landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003')
states = ee.FeatureCollection("TIGER/2018/States")
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
# Add Earth Engine layers to the map
Map.addLayer(dem, vis_params, 'SRTM DEM', True, 0.5)
Map.addLayer(landcover, {}, 'Land cover')
Map.addLayer(
landsat7,
{'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200, 'gamma': 1.5},
'Landsat 7',
)
Map.addLayer(states, {}, "US States")
Map
```
### Search the Earth Engine Data Catalog
```
Map = geemap.Map()
Map
dem = ee.Image('CGIAR/SRTM90_V4')
Map.addLayer(dem, {}, "CGIAR/SRTM90_V4")
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
Map.addLayer(dem, vis_params, "DEM")
```
### Use the datasets module
```
from geemap.datasets import DATA
Map = geemap.Map()
dem = ee.Image(DATA.USGS_SRTMGL1_003)
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
Map.addLayer(dem, vis_params, 'SRTM DEM')
Map
```
### Use the Inspector tool

```
Map = geemap.Map()
# Add Earth Engine datasets
dem = ee.Image('USGS/SRTMGL1_003')
landcover = ee.Image("ESA/GLOBCOVER_L4_200901_200912_V2_3").select('landcover')
landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003').select(
['B1', 'B2', 'B3', 'B4', 'B5', 'B7']
)
states = ee.FeatureCollection("TIGER/2018/States")
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
# Add Earth Engine layers to the map
Map.addLayer(dem, vis_params, 'SRTM DEM', True, 0.5)
Map.addLayer(landcover, {}, 'Land cover')
Map.addLayer(
landsat7,
{'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200, 'gamma': 1.5},
'Landsat 7',
)
Map.addLayer(states, {}, "US States")
Map
```
## Data visualization
### Use the Plotting tool

```
Map = geemap.Map()
landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003').select(
['B1', 'B2', 'B3', 'B4', 'B5', 'B7']
)
landsat_vis = {'bands': ['B4', 'B3', 'B2'], 'gamma': 1.4}
Map.addLayer(landsat7, landsat_vis, "Landsat")
hyperion = ee.ImageCollection('EO1/HYPERION').filter(
ee.Filter.date('2016-01-01', '2017-03-01')
)
hyperion_vis = {
'min': 1000.0,
'max': 14000.0,
'gamma': 2.5,
}
Map.addLayer(hyperion, hyperion_vis, 'Hyperion')
Map
```
### Change layer opacity
```
Map = geemap.Map(center=(40, -100), zoom=4)
dem = ee.Image('USGS/SRTMGL1_003')
states = ee.FeatureCollection("TIGER/2018/States")
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
Map.addLayer(dem, vis_params, 'SRTM DEM', True, 1)
Map.addLayer(states, {}, "US States", True)
Map
```
### Visualize raster data
```
Map = geemap.Map(center=(40, -100), zoom=4)
# Add Earth Engine dataset
dem = ee.Image('USGS/SRTMGL1_003')
landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003').select(
['B1', 'B2', 'B3', 'B4', 'B5', 'B7']
)
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
Map.addLayer(dem, vis_params, 'SRTM DEM', True, 1)
Map.addLayer(
landsat7,
{'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200, 'gamma': 2},
'Landsat 7',
)
Map
```
### Visualize vector data
```
Map = geemap.Map()
states = ee.FeatureCollection("TIGER/2018/States")
Map.addLayer(states, {}, "US States")
Map
vis_params = {
'color': '000000',
'colorOpacity': 1,
'pointSize': 3,
'pointShape': 'circle',
'width': 2,
'lineType': 'solid',
'fillColorOpacity': 0.66,
}
palette = ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']
Map.add_styled_vector(
states, column="NAME", palette=palette, layer_name="Styled vector", **vis_params
)
```
### Add a legend
```
legends = geemap.builtin_legends
for legend in legends:
print(legend)
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(legend_title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
```
### Add a colorbar
```
Map = geemap.Map()
dem = ee.Image('USGS/SRTMGL1_003')
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
Map.addLayer(dem, vis_params, 'SRTM DEM')
colors = vis_params['palette']
vmin = vis_params['min']
vmax = vis_params['max']
Map.add_colorbar(vis_params, label="Elevation (m)", layer_name="SRTM DEM")
Map
Map.add_colorbar(
vis_params, label="Elevation (m)", layer_name="SRTM DEM", orientation="vertical"
)
Map.add_colorbar(
vis_params,
label="Elevation (m)",
layer_name="SRTM DEM",
orientation="vertical",
transparent_bg=True,
)
Map.add_colorbar(
vis_params,
label="Elevation (m)",
layer_name="SRTM DEM",
orientation="vertical",
transparent_bg=True,
discrete=True,
)
```
### Create a split-panel map
```
Map = geemap.Map()
Map.split_map(left_layer='HYBRID', right_layer='TERRAIN')
Map
Map = geemap.Map()
Map.split_map(
left_layer='NLCD 2016 CONUS Land Cover', right_layer='NLCD 2001 CONUS Land Cover'
)
Map
nlcd_2001 = ee.Image('USGS/NLCD/NLCD2001').select('landcover')
nlcd_2016 = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
left_layer = geemap.ee_tile_layer(nlcd_2001, {}, 'NLCD 2001')
right_layer = geemap.ee_tile_layer(nlcd_2016, {}, 'NLCD 2016')
Map = geemap.Map()
Map.split_map(left_layer, right_layer)
Map
```
### Create linked maps
```
image = (
ee.ImageCollection('COPERNICUS/S2')
.filterDate('2018-09-01', '2018-09-30')
.map(lambda img: img.divide(10000))
.median()
)
vis_params = [
{'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3, 'gamma': 1.3},
{'bands': ['B8', 'B11', 'B4'], 'min': 0, 'max': 0.3, 'gamma': 1.3},
{'bands': ['B8', 'B4', 'B3'], 'min': 0, 'max': 0.3, 'gamma': 1.3},
{'bands': ['B12', 'B12', 'B4'], 'min': 0, 'max': 0.3, 'gamma': 1.3},
]
labels = [
'Natural Color (B4/B3/B2)',
'Land/Water (B8/B11/B4)',
'Color Infrared (B8/B4/B3)',
'Vegetation (B12/B11/B4)',
]
geemap.linked_maps(
rows=2,
cols=2,
height="400px",
center=[38.4151, 21.2712],
zoom=12,
ee_objects=[image],
vis_params=vis_params,
labels=labels,
label_position="topright",
)
```
### Create timelapse animations
```
geemap.show_youtube('https://youtu.be/mA21Us_3m28')
```
### Create time-series composites
```
geemap.show_youtube('https://youtu.be/kEltQkNia6o')
```
## Data analysis
### Descriptive statistics
```
Map = geemap.Map()
centroid = ee.Geometry.Point([-122.4439, 37.7538])
image = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR').filterBounds(centroid).first()
vis = {'min': 0, 'max': 3000, 'bands': ['B5', 'B4', 'B3']}
Map.centerObject(centroid, 8)
Map.addLayer(image, vis, "Landsat-8")
Map
image.propertyNames().getInfo()
image.get('CLOUD_COVER').getInfo()
props = geemap.image_props(image)
props.getInfo()
stats = geemap.image_stats(image, scale=90)
stats.getInfo()
```
### Zonal statistics
```
Map = geemap.Map()
# Add Earth Engine dataset
dem = ee.Image('USGS/SRTMGL1_003')
# Set visualization parameters.
dem_vis = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
# Add Earth Engine DEM to map
Map.addLayer(dem, dem_vis, 'SRTM DEM')
# Add Landsat data to map
landsat = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003')
landsat_vis = {'bands': ['B4', 'B3', 'B2'], 'gamma': 1.4}
Map.addLayer(landsat, landsat_vis, "LE7_TOA_5YEAR/1999_2003")
states = ee.FeatureCollection("TIGER/2018/States")
Map.addLayer(states, {}, 'US States')
Map
out_dir = os.path.expanduser('~/Downloads')
out_dem_stats = os.path.join(out_dir, 'dem_stats.csv')
if not os.path.exists(out_dir):
os.makedirs(out_dir)
# Allowed output formats: csv, shp, json, kml, kmz
# Allowed statistics type: MEAN, MAXIMUM, MINIMUM, MEDIAN, STD, MIN_MAX, VARIANCE, SUM
geemap.zonal_statistics(dem, states, out_dem_stats, statistics_type='MEAN', scale=1000)
out_landsat_stats = os.path.join(out_dir, 'landsat_stats.csv')
geemap.zonal_statistics(
landsat, states, out_landsat_stats, statistics_type='SUM', scale=1000
)
```
### Zonal statistics by group
```
Map = geemap.Map()
dataset = ee.Image('USGS/NLCD/NLCD2016')
landcover = ee.Image(dataset.select('landcover'))
Map.addLayer(landcover, {}, 'NLCD 2016')
states = ee.FeatureCollection("TIGER/2018/States")
Map.addLayer(states, {}, 'US States')
Map.add_legend(builtin_legend='NLCD')
Map
out_dir = os.path.expanduser('~/Downloads')
nlcd_stats = os.path.join(out_dir, 'nlcd_stats.csv')
if not os.path.exists(out_dir):
os.makedirs(out_dir)
# statistics_type can be either 'SUM' or 'PERCENTAGE'
# denominator can be used to convert square meters to other areal units, such as square kilometers
geemap.zonal_statistics_by_group(
landcover,
states,
nlcd_stats,
statistics_type='SUM',
denominator=1000000,
decimal_places=2,
)
```
### Unsupervised classification
Source: https://developers.google.com/earth-engine/guides/clustering
The `ee.Clusterer` package handles unsupervised classification (or clustering) in Earth Engine. These algorithms are currently based on the algorithms with the same name in [Weka](http://www.cs.waikato.ac.nz/ml/weka/). More details about each Clusterer are available in the reference docs in the Code Editor.
Clusterers are used in the same manner as classifiers in Earth Engine. The general workflow for clustering is:
1. Assemble features with numeric properties in which to find clusters.
2. Instantiate a clusterer. Set its parameters if necessary.
3. Train the clusterer using the training data.
4. Apply the clusterer to an image or feature collection.
5. Label the clusters.
The training data is a `FeatureCollection` with properties that will be input to the clusterer. Unlike classifiers, there is no input class value for a `Clusterer`. Like classifiers, the data for the train and apply steps are expected to have the same number of values. When a trained clusterer is applied to an image or table, it assigns an integer cluster ID to each pixel or feature.
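For intuition outside Earth Engine, the same train-then-apply pattern can be sketched with a tiny NumPy k-means. The helpers `kmeans_train`/`kmeans_apply` and all the data below are invented for illustration; this is not the Weka implementation GEE actually uses.

```python
import numpy as np

def kmeans_train(X, k, iters=20):
    """Return k cluster centers fitted to the rows of X (Lloyd's algorithm).
    Farthest-point initialization: deterministic and good enough for a sketch."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def kmeans_apply(X, centers):
    """Assign each row of X an integer cluster ID, like a trained clusterer."""
    return np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)

# Two well-separated "spectral" blobs standing in for sampled pixels
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
centers = kmeans_train(X, k=2)       # step 3: train the clusterer
labels = kmeans_apply(X, centers)    # step 4: apply it to new data
```

Each row gets an integer cluster ID, mirroring how a trained `ee.Clusterer` labels every pixel of an image.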
Here is a simple example of building and using an ee.Clusterer:

**Add data to the map**
```
Map = geemap.Map()
point = ee.Geometry.Point([-87.7719, 41.8799])
image = (
ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
.filterBounds(point)
.filterDate('2019-01-01', '2019-12-31')
.sort('CLOUD_COVER')
.first()
.select('B[1-7]')
)
vis_params = {'min': 0, 'max': 3000, 'bands': ['B5', 'B4', 'B3']}
Map.centerObject(point, 8)
Map.addLayer(image, vis_params, "Landsat-8")
Map
```
**Make training dataset**
There are several ways you can create a region for generating the training dataset.
- Draw a shape (e.g., rectangle) on the map and then use `region = Map.user_roi`
- Define a geometry, such as `region = ee.Geometry.Rectangle([-122.6003, 37.4831, -121.8036, 37.8288])`
- Create a buffer zone around a point, such as `region = ee.Geometry.Point([-122.4439, 37.7538]).buffer(10000)`
- If you don't define a region, it will use the image footprint by default
```
training = image.sample(
**{
# 'region': region,
'scale': 30,
'numPixels': 5000,
'seed': 0,
'geometries': True, # Set this to False to ignore geometries
}
)
Map.addLayer(training, {}, 'training', False)
```
**Train the clusterer**
```
# Instantiate the clusterer and train it.
n_clusters = 5
clusterer = ee.Clusterer.wekaKMeans(n_clusters).train(training)
```
**Classify the image**
```
# Cluster the input using the trained clusterer.
result = image.cluster(clusterer)
# Display the clusters with random colors.
Map.addLayer(result.randomVisualizer(), {}, 'clusters')
Map
```
**Label the clusters**
```
legend_keys = ['One', 'Two', 'Three', 'Four', 'etc']
legend_colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# Reclassify the map
result = result.remap([0, 1, 2, 3, 4], [1, 2, 3, 4, 5])
Map.addLayer(
result, {'min': 1, 'max': 5, 'palette': legend_colors}, 'Labelled clusters'
)
Map.add_legend(
legend_keys=legend_keys, legend_colors=legend_colors, position='bottomright'
)
```
**Visualize the result**
```
print('Change layer opacity:')
cluster_layer = Map.layers[-1]
cluster_layer.interact(opacity=(0, 1, 0.1))
Map
```
**Export the result**
```
out_dir = os.path.expanduser('~/Downloads')
out_file = os.path.join(out_dir, 'cluster.tif')
geemap.ee_export_image(result, filename=out_file, scale=90)
# geemap.ee_export_image_to_drive(result, description='clusters', folder='export', scale=90)
```
### Supervised classification
Source: https://developers.google.com/earth-engine/guides/classification
The `Classifier` package handles supervised classification by traditional ML algorithms running in Earth Engine. These classifiers include CART, RandomForest, NaiveBayes and SVM. The general workflow for classification is:
1. Collect training data. Assemble features which have a property that stores the known class label and properties storing numeric values for the predictors.
2. Instantiate a classifier. Set its parameters if necessary.
3. Train the classifier using the training data.
4. Classify an image or feature collection.
5. Estimate classification error with independent validation data.
The training data is a `FeatureCollection` with a property storing the class label and properties storing predictor variables. Class labels should be consecutive integers starting from 0. If necessary, use remap() to convert class values to consecutive integers. The predictors should be numeric.
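The "consecutive integers starting from 0" requirement is easy to satisfy locally as well; a minimal NumPy sketch of such a remapping (the class codes below are made-up stand-ins for NLCD-style values — in Earth Engine itself you would use `remap()` as noted above):

```python
import numpy as np

raw = np.array([11, 21, 21, 42, 11, 90])  # hypothetical non-consecutive class codes
classes = np.unique(raw)                  # sorted unique values
remapped = np.searchsorted(classes, raw)  # consecutive IDs 0..len(classes)-1

print(classes)    # [11 21 42 90]
print(remapped)   # [0 1 1 2 0 3]
```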

**Add data to the map**
```
Map = geemap.Map()
point = ee.Geometry.Point([-122.4439, 37.7538])
image = (
ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
.filterBounds(point)
.filterDate('2016-01-01', '2016-12-31')
.sort('CLOUD_COVER')
.first()
.select('B[1-7]')
)
vis_params = {'min': 0, 'max': 3000, 'bands': ['B5', 'B4', 'B3']}
Map.centerObject(point, 8)
Map.addLayer(image, vis_params, "Landsat-8")
Map
```
**Make training dataset**
There are several ways you can create a region for generating the training dataset.
- Draw a shape (e.g., a rectangle) on the map and then use `region = Map.user_roi`
- Define a geometry, such as `region = ee.Geometry.Rectangle([-122.6003, 37.4831, -121.8036, 37.8288])`
- Create a buffer zone around a point, such as `region = ee.Geometry.Point([-122.4439, 37.7538]).buffer(10000)`
- If you don't define a region, it will use the image footprint by default
```
# region = Map.user_roi
# region = ee.Geometry.Rectangle([-122.6003, 37.4831, -121.8036, 37.8288])
# region = ee.Geometry.Point([-122.4439, 37.7538]).buffer(10000)
```
In this example, we are going to use the [USGS National Land Cover Database (NLCD)](https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD) to create a label dataset for training.

```
nlcd = ee.Image('USGS/NLCD/NLCD2016').select('landcover').clip(image.geometry())
Map.addLayer(nlcd, {}, 'NLCD')
Map
# Make the training dataset.
points = nlcd.sample(
**{
'region': image.geometry(),
'scale': 30,
'numPixels': 5000,
'seed': 0,
'geometries': True, # Set this to False to ignore geometries
}
)
Map.addLayer(points, {}, 'training', False)
```
**Train the classifier**
```
# Use these bands for prediction.
bands = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7']
# This property of the table stores the land cover labels.
label = 'landcover'
# Overlay the points on the imagery to get training.
training = image.select(bands).sampleRegions(
**{'collection': points, 'properties': [label], 'scale': 30}
)
# Train a CART classifier with default parameters.
trained = ee.Classifier.smileCart().train(training, label, bands)
```
**Classify the image**
```
# Classify the image with the same bands used for training.
result = image.select(bands).classify(trained)
# Display the classified image with random colors.
Map.addLayer(result.randomVisualizer(), {}, 'classified')
Map
```
**Render categorical map**
To render a categorical map, we can set two image properties: `classification_class_values` and `classification_class_palette` (named after the `classification` band produced by the classifier). We can use the same style as the NLCD so that it is easy to compare the two maps.
```
class_values = nlcd.get('landcover_class_values').getInfo()
class_palette = nlcd.get('landcover_class_palette').getInfo()
landcover = result.set('classification_class_values', class_values)
landcover = landcover.set('classification_class_palette', class_palette)
Map.addLayer(landcover, {}, 'Land cover')
Map.add_legend(builtin_legend='NLCD')
Map
```
**Visualize the result**
```
print('Change layer opacity:')
cluster_layer = Map.layers[-1]
cluster_layer.interact(opacity=(0, 1, 0.1))
```
**Export the result**
```
out_dir = os.path.expanduser('~/Downloads')
out_file = os.path.join(out_dir, 'landcover.tif')
geemap.ee_export_image(landcover, filename=out_file, scale=900)
# geemap.ee_export_image_to_drive(landcover, description='landcover', folder='export', scale=900)
```
### Training sample creation

```
geemap.show_youtube('https://youtu.be/VWh5PxXPZw0')
Map = geemap.Map()
Map
```
### WhiteboxTools
```
import whiteboxgui
whiteboxgui.show()
whiteboxgui.show(tree=True)
```

```
Map = geemap.Map()
Map
```
## Map making
### Plot a single band image
```
import matplotlib.pyplot as plt
from geemap import cartoee
geemap.ee_initialize()
srtm = ee.Image("CGIAR/SRTM90_V4")
region = [-180, -60, 180, 85] # define bounding box to request data
vis = {'min': 0, 'max': 3000} # define visualization parameters for image
fig = plt.figure(figsize=(15, 10))
cmap = "gist_earth" # colormap we want to use
# cmap = "terrain"
# use cartoee to get a map
ax = cartoee.get_map(srtm, region=region, vis_params=vis, cmap=cmap)
# add a colorbar to the map using the visualization params we passed to the map
cartoee.add_colorbar(
ax, vis, cmap=cmap, loc="right", label="Elevation", orientation="vertical"
)
# add gridlines to the map at a specified interval
cartoee.add_gridlines(ax, interval=[60, 30], linestyle="--")
# add coastlines using the cartopy api
ax.coastlines(color="red")
ax.set_title(label='Global Elevation Map', fontsize=15)
plt.show()
```
### Plot an RGB image
```
# get a landsat image to visualize
image = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_044034_20140318')
# define the visualization parameters to view
vis = {"bands": ['B5', 'B4', 'B3'], "min": 0, "max": 5000, "gamma": 1.3}
fig = plt.figure(figsize=(15, 10))
# here is the bounding box of the map extent we want to use
# formatted as [W, S, E, N]
zoom_region = [-122.6265, 37.3458, -121.8025, 37.9178]
# plot the map over the region of interest
ax = cartoee.get_map(image, vis_params=vis, region=zoom_region)
# add the gridlines and specify that the xtick labels be rotated 45 degrees
cartoee.add_gridlines(ax, interval=0.15, xtick_rotation=45, linestyle=":")
# add coastline
ax.coastlines(color="yellow")
# add north arrow
cartoee.add_north_arrow(
ax, text="N", xy=(0.05, 0.25), text_color="white", arrow_color="white", fontsize=20
)
# add scale bar
cartoee.add_scale_bar_lite(
ax, length=10, xy=(0.1, 0.05), fontsize=20, color="white", unit="km"
)
ax.set_title(label='Landsat False Color Composite (Band 5/4/3)', fontsize=15)
plt.show()
```
### Add map elements
```
from matplotlib.lines import Line2D
# get a landsat image to visualize
image = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_044034_20140318')
# define the visualization parameters to view
vis = {"bands": ['B5', 'B4', 'B3'], "min": 0, "max": 5000, "gamma": 1.3}
fig = plt.figure(figsize=(15, 10))
# here is the bounding box of the map extent we want to use
# formatted as [W, S, E, N]
zoom_region = [-122.6265, 37.3458, -121.8025, 37.9178]
# plot the map over the region of interest
ax = cartoee.get_map(image, vis_params=vis, region=zoom_region)
# add the gridlines and specify that the xtick labels be rotated 45 degrees
cartoee.add_gridlines(ax, interval=0.15, xtick_rotation=0, linestyle=":")
# add coastline
ax.coastlines(color="cyan")
# add north arrow
cartoee.add_north_arrow(
ax, text="N", xy=(0.05, 0.25), text_color="white", arrow_color="white", fontsize=20
)
# add scale bar
cartoee.add_scale_bar_lite(
ax, length=10, xy=(0.1, 0.05), fontsize=20, color="white", unit="km"
)
ax.set_title(label='Landsat False Color Composite (Band 5/4/3)', fontsize=15)
# add legend
legend_elements = [
Line2D([], [], color='#00ffff', lw=2, label='Coastline'),
Line2D(
[],
[],
marker='o',
color='#A8321D',
label='City',
markerfacecolor='#A8321D',
markersize=10,
ls='',
),
]
cartoee.add_legend(ax, legend_elements, loc='lower right')
plt.show()
```
### Plot multiple layers
```
Map = geemap.Map()
image = (
ee.ImageCollection('MODIS/MCD43A4_006_NDVI')
.filter(ee.Filter.date('2018-04-01', '2018-05-01'))
.select("NDVI")
.first()
)
vis_params = {
'min': 0.0,
'max': 1.0,
'palette': [
'FFFFFF',
'CE7E45',
'DF923D',
'F1B555',
'FCD163',
'99B718',
'74A901',
'66A000',
'529400',
'3E8601',
'207401',
'056201',
'004C00',
'023B01',
'012E01',
'011D01',
'011301',
],
}
Map.setCenter(-7.03125, 31.0529339857, 2)
Map.addLayer(image, vis_params, 'MODIS NDVI')
countries = geemap.shp_to_ee("../data/countries.shp")
style = {"color": "00000088", "width": 1, "fillColor": "00000000"}
Map.addLayer(countries.style(**style), {}, "Countries")
ndvi = image.visualize(**vis_params)
blend = ndvi.blend(countries.style(**style))
Map.addLayer(blend, {}, "Blend")
Map
# specify region to focus on
bbox = [-180, -88, 180, 88]
fig = plt.figure(figsize=(15, 10))
# plot the result with cartoee using a PlateCarre projection (default)
ax = cartoee.get_map(blend, region=bbox)
cb = cartoee.add_colorbar(ax, vis_params=vis_params, loc='right')
ax.set_title(label='MODIS NDVI', fontsize=15)
# ax.coastlines()
plt.show()
import cartopy.crs as ccrs
fig = plt.figure(figsize=(15, 10))
projection = ccrs.EqualEarth(central_longitude=-180)
# plot the result with cartoee using a PlateCarre projection (default)
ax = cartoee.get_map(blend, region=bbox, proj=projection)
cb = cartoee.add_colorbar(ax, vis_params=vis_params, loc='right')
ax.set_title(label='MODIS NDVI', fontsize=15)
# ax.coastlines()
plt.show()
```
### Use custom projections
```
import cartopy.crs as ccrs
# get an earth engine image of ocean data for Jan-Mar 2018
ocean = (
ee.ImageCollection('NASA/OCEANDATA/MODIS-Terra/L3SMI')
.filter(ee.Filter.date('2018-01-01', '2018-03-01'))
.median()
.select(["sst"], ["SST"])
)
# set parameters for plotting
# will plot the Sea Surface Temp with specific range and colormap
visualization = {'bands': "SST", 'min': -2, 'max': 30}
# specify region to focus on
bbox = [-180, -88, 180, 88]
fig = plt.figure(figsize=(15, 10))
# plot the result with cartoee using a PlateCarre projection (default)
ax = cartoee.get_map(ocean, cmap='plasma', vis_params=visualization, region=bbox)
cb = cartoee.add_colorbar(ax, vis_params=visualization, loc='right', cmap='plasma')
ax.set_title(label='Sea Surface Temperature', fontsize=15)
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15, 10))
# create a new Mollweide projection centered on the Pacific
projection = ccrs.Mollweide(central_longitude=-180)
# plot the result with cartoee using the Mollweide projection
ax = cartoee.get_map(
ocean, vis_params=visualization, region=bbox, cmap='plasma', proj=projection
)
cb = cartoee.add_colorbar(
ax, vis_params=visualization, loc='bottom', cmap='plasma', orientation='horizontal'
)
ax.set_title("Mollweide projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15, 10))
# create a new Robinson projection centered on the Pacific
projection = ccrs.Robinson(central_longitude=-180)
# plot the result with cartoee using the Goode homolosine projection
ax = cartoee.get_map(
ocean, vis_params=visualization, region=bbox, cmap='plasma', proj=projection
)
cb = cartoee.add_colorbar(
ax, vis_params=visualization, loc='bottom', cmap='plasma', orientation='horizontal'
)
ax.set_title("Robinson projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15, 10))
# create a new equal Earth projection focused on the Pacific
projection = ccrs.EqualEarth(central_longitude=-180)
# plot the result with cartoee using the orographic projection
ax = cartoee.get_map(
ocean, vis_params=visualization, region=bbox, cmap='plasma', proj=projection
)
cb = cartoee.add_colorbar(
ax, vis_params=visualization, loc='right', cmap='plasma', orientation='vertical'
)
ax.set_title("Equal Earth projection")
ax.coastlines()
plt.show()
fig = plt.figure(figsize=(15, 10))
# create a new orographic projection focused on the Pacific
projection = ccrs.Orthographic(-130, -10)
# plot the result with cartoee using the orographic projection
ax = cartoee.get_map(
ocean, vis_params=visualization, region=bbox, cmap='plasma', proj=projection
)
cb = cartoee.add_colorbar(
ax, vis_params=visualization, loc='right', cmap='plasma', orientation='vertical'
)
ax.set_title("Orographic projection")
ax.coastlines()
plt.show()
```
### Create timelapse animations
```
Map = geemap.Map()
lon = -115.1585
lat = 36.1500
start_year = 1984
end_year = 2000
point = ee.Geometry.Point(lon, lat)
years = ee.List.sequence(start_year, end_year)
def get_best_image(year):
start_date = ee.Date.fromYMD(year, 1, 1)
end_date = ee.Date.fromYMD(year, 12, 31)
image = (
ee.ImageCollection("LANDSAT/LT05/C01/T1_SR")
.filterBounds(point)
.filterDate(start_date, end_date)
.sort("CLOUD_COVER")
.first()
)
return ee.Image(image)
collection = ee.ImageCollection(years.map(get_best_image))
vis_params = {"bands": ['B4', 'B3', 'B2'], "min": 0, "max": 5000}
image = ee.Image(collection.first())
Map.addLayer(image, vis_params, 'First image')
Map.setCenter(lon, lat, 8)
Map
w = 0.4
h = 0.3
region = [lon - w, lat - h, lon + w, lat + h]
fig = plt.figure(figsize=(10, 8))
# use cartoee to get a map
ax = cartoee.get_map(image, region=region, vis_params=vis_params)
# add gridlines to the map at a specified interval
cartoee.add_gridlines(ax, interval=[0.2, 0.2], linestyle=":")
# add north arrow
north_arrow_dict = {
"text": "N",
"xy": (0.1, 0.3),
"arrow_length": 0.15,
"text_color": "white",
"arrow_color": "white",
"fontsize": 20,
"width": 5,
"headwidth": 15,
"ha": "center",
"va": "center",
}
cartoee.add_north_arrow(ax, **north_arrow_dict)
# add scale bar
scale_bar_dict = {
"length": 10,
"xy": (0.1, 0.05),
"linewidth": 3,
"fontsize": 20,
"color": "white",
"unit": "km",
"ha": "center",
"va": "bottom",
}
cartoee.add_scale_bar_lite(ax, **scale_bar_dict)
ax.set_title(label='Las Vegas, NV', fontsize=15)
plt.show()
cartoee.get_image_collection_gif(
ee_ic=collection,
out_dir=os.path.expanduser("~/Downloads/timelapse"),
out_gif="animation.gif",
vis_params=vis_params,
region=region,
fps=5,
mp4=True,
grid_interval=(0.2, 0.2),
plot_title="Las Vegas, NV",
date_format='YYYY-MM-dd',
fig_size=(10, 8),
dpi_plot=100,
file_format="png",
north_arrow_dict=north_arrow_dict,
scale_bar_dict=scale_bar_dict,
verbose=True,
)
```
## Data export
### Export ee.Image
```
Map = geemap.Map()
image = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003')
landsat_vis = {'bands': ['B4', 'B3', 'B2'], 'gamma': 1.4}
Map.addLayer(image, landsat_vis, "LE7_TOA_5YEAR/1999_2003", True, 1)
Map
# Draw any shapes on the map using the Drawing tools before executing this code block
roi = Map.user_roi
if roi is None:
roi = ee.Geometry.Polygon(
[
[
[-115.413031, 35.889467],
[-115.413031, 36.543157],
[-114.034328, 36.543157],
[-114.034328, 35.889467],
[-115.413031, 35.889467],
]
]
)
# Set output directory
out_dir = os.path.expanduser('~/Downloads')
if not os.path.exists(out_dir):
os.makedirs(out_dir)
filename = os.path.join(out_dir, 'landsat.tif')
```
Exporting all bands as one single image
```
image = image.clip(roi).unmask()
geemap.ee_export_image(
image, filename=filename, scale=90, region=roi, file_per_band=False
)
```
Exporting each band as one image
```
geemap.ee_export_image(
image, filename=filename, scale=90, region=roi, file_per_band=True
)
```
Export an image to Google Drive
```
# geemap.ee_export_image_to_drive(image, description='landsat', folder='export', region=roi, scale=30)
```
### Export ee.ImageCollection
```
loc = ee.Geometry.Point(-99.2222, 46.7816)
collection = (
ee.ImageCollection('USDA/NAIP/DOQQ')
.filterBounds(loc)
.filterDate('2008-01-01', '2020-01-01')
.filter(ee.Filter.listContains("system:band_names", "N"))
)
collection.aggregate_array('system:index').getInfo()
geemap.ee_export_image_collection(collection, out_dir=out_dir)
# geemap.ee_export_image_collection_to_drive(collection, folder='export', scale=10)
```
### Extract pixels as a numpy array
```
import matplotlib.pyplot as plt
img = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_038029_20180810').select(['B4', 'B5', 'B6'])
aoi = ee.Geometry.Polygon(
[[[-110.8, 44.7], [-110.8, 44.6], [-110.6, 44.6], [-110.6, 44.7]]], None, False
)
rgb_img = geemap.ee_to_numpy(img, region=aoi)
print(rgb_img.shape)
rgb_img_test = (255 * ((rgb_img[:, :, 0:3] - 100) / 3500)).astype('uint8')
plt.imshow(rgb_img_test)
plt.show()
```
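The block above linearly stretches raw reflectance values into the 0–255 range for display. A small hedged sketch of the same idea as a reusable helper (`stretch_to_uint8` is a hypothetical name, not a geemap function), with clipping added so out-of-range pixels don't wrap around:

```python
import numpy as np

# Hypothetical helper generalizing the stretch above: linearly rescale
# values from [low, high] into 0-255 uint8, clipping out-of-range pixels.
def stretch_to_uint8(arr, low, high):
    scaled = (arr.astype('float64') - low) / (high - low)
    return (255 * np.clip(scaled, 0, 1)).astype('uint8')

band = np.array([[100.0, 1850.0], [3600.0, 2000.0]])
print(stretch_to_uint8(band, 100, 3600))
```

Without the clip, a pixel brighter than the chosen maximum would overflow the `uint8` cast and display as a dark artifact.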
### Export pixel values to points
```
Map = geemap.Map()
# Add Earth Engine dataset
dem = ee.Image('USGS/SRTMGL1_003')
landsat7 = ee.Image('LANDSAT/LE7_TOA_5YEAR/1999_2003')
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}
# Add Earth Eninge layers to Map
Map.addLayer(
landsat7, {'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200}, 'Landsat 7'
)
Map.addLayer(dem, vis_params, 'SRTM DEM', True, 1)
Map
```
**Download sample data**
```
work_dir = os.path.expanduser('~/Downloads')
in_shp = os.path.join(work_dir, 'us_cities.shp')
if not os.path.exists(in_shp):
data_url = 'https://github.com/giswqs/data/raw/main/us/us_cities.zip'
geemap.download_from_url(data_url, out_dir=work_dir)
in_fc = geemap.shp_to_ee(in_shp)
Map.addLayer(in_fc, {}, 'Cities')
```
**Export pixel values as a shapefile**
```
out_shp = os.path.join(work_dir, 'dem.shp')
geemap.extract_values_to_points(in_fc, dem, out_shp)
```
**Export pixel values as a csv**
```
out_csv = os.path.join(work_dir, 'landsat.csv')
geemap.extract_values_to_points(in_fc, landsat7, out_csv)
```
### Export ee.FeatureCollection
```
Map = geemap.Map()
fc = ee.FeatureCollection('users/giswqs/public/countries')
Map.addLayer(fc, {}, "Countries")
Map
out_dir = os.path.expanduser('~/Downloads')
out_shp = os.path.join(out_dir, 'countries.shp')
geemap.ee_to_shp(fc, filename=out_shp)
out_csv = os.path.join(out_dir, 'countries.csv')
geemap.ee_export_vector(fc, filename=out_csv)
out_kml = os.path.join(out_dir, 'countries.kml')
geemap.ee_export_vector(fc, filename=out_kml)
# geemap.ee_export_vector_to_drive(fc, description="countries", folder="export", file_format="shp")
```
## Web apps
### Deploy web apps using ngrok
**Steps to deploy an Earth Engine App:**
1. Install ngrok by following the [instructions](https://ngrok.com/download)
2. Download the notebook [71_timelapse.ipynb](https://geemap.org/notebooks/71_timelapse/71_timelapse.ipynb)
3. Run this from the command line: `voila --no-browser 71_timelapse.ipynb`
4. Run this from the command line: `ngrok http 8866`
5. Copy the link from the ngrok terminal window. The link looks like the following: https://randomstring.ngrok.io
6. Share the link with anyone.
**Optional steps:**
* To show code cells from your app, run this from the command line: `voila --no-browser --strip_sources=False 71_timelapse.ipynb`
* To protect your app with a password, run this: `ngrok http -auth="username:password" 8866`
* To run a simple Python HTTP server in the directory, run this: `sudo python -m http.server 80`
```
geemap.show_youtube("https://youtu.be/eRDZBVJcNCk")
```
### Deploy web apps using Heroku
**Steps to deploy an Earth Engine App:**
- [Sign up](https://signup.heroku.com/) for a free heroku account.
- Follow the [instructions](https://devcenter.heroku.com/articles/getting-started-with-python#set-up) to install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and Heroku Command Line Interface (CLI).
- Authenticate heroku using the `heroku login` command.
- Clone this repository: <https://github.com/giswqs/geemap-heroku>
- Create your own Earth Engine notebook and put it under the `notebooks` directory.
- Add Python dependencies in the `requirements.txt` file if needed.
- Edit the `Procfile` file by replacing `notebooks/geemap.ipynb` with the path to your own notebook.
- Commit changes to the repository by using `git add . && git commit -am "message"`.
- Create a heroku app: `heroku create`
- Run the `config_vars.py` script to extract Earth Engine token from your computer and set it as an environment variable on heroku: `python config_vars.py`
- Deploy your code to heroku: `git push heroku master`
- Open your heroku app: `heroku open`
**Optional steps:**
- To specify a name for your app, use `heroku apps:create example`
- To preview your app locally, use `heroku local web`
- To hide code cells from your app, you can edit the `Procfile` file and set `--strip_sources=True`
- To periodically check for idle kernels, you can edit the `Procfile` file and set `--MappingKernelManager.cull_interval=60 --MappingKernelManager.cull_idle_timeout=120`
- To view information about your running app, use `heroku logs --tail`
- To set an environment variable on heroku, use `heroku config:set NAME=VALUE`
- To view environment variables for your app, use `heroku config`
```
geemap.show_youtube("https://youtu.be/nsIjfD83ggA")
```
```
import pandas as pd
data = pd.read_csv('./data.csv')
X,y = data.drop('target',axis=1),data['target']
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.25)
import torch
import torch.nn as nn
import numpy as np
X_train = torch.from_numpy(np.array(X_train).astype(np.float32))
y_train = torch.from_numpy(np.array(y_train).astype(np.float32))
X_test = torch.from_numpy(np.array(X_test).astype(np.float32))
y_test = torch.from_numpy(np.array(y_test).astype(np.float32))
X_train.shape
X_test.shape
y_train.shape
y_test.shape
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(13,32)
self.fc2 = nn.Linear(32,64)
self.fc3 = nn.Linear(64,128)
self.fc4 = nn.Linear(128,256)
self.fc5 = nn.Linear(256,512)
self.fc6 = nn.Linear(512,256)
self.fc7 = nn.Linear(256,1)
def forward(self,X):
preds = self.fc1(X)
preds = F.relu(preds)
preds = self.fc2(preds)
preds = F.relu(preds)
preds = self.fc3(preds)
preds = F.relu(preds)
preds = self.fc4(preds)
preds = F.relu(preds)
preds = self.fc5(preds)
preds = F.relu(preds)
preds = self.fc6(preds)
preds = F.relu(preds)
preds = self.fc7(preds)
        return torch.sigmoid(preds)  # F.sigmoid is deprecated; use torch.sigmoid
device = torch.device('cuda')
X_train = X_train.to(device)
y_train = y_train.to(device)
X_test = X_test.to(device)
y_test = y_test.to(device)
PROJECT_NAME = 'Heart-Disease-UCI'
def get_loss(criterion,X,y,model):
model.eval()
with torch.no_grad():
preds = model(X.float().to(device))
preds = preds.to(device)
y = y.to(device)
loss = criterion(preds,y)
model.train()
return loss.item()
def get_accuracy(X,y,model):
model.eval()
with torch.no_grad():
correct = 0
total = 0
for i in range(len(X)):
pred = model(X[i].float().to(device))
pred.to(device)
            if round(float(pred[0])) == round(float(y[i])):  # int() would truncate before round()
correct += 1
total += 1
if correct == 0:
correct += 1
model.train()
return round(correct/total,3)
import wandb
from tqdm import tqdm
EPOCHS = 100
model = Test_Model().to(device)
optimizer = torch.optim.SGD(model.parameters(),lr=0.25)
criterion = nn.L1Loss()
wandb.init(project=PROJECT_NAME,name='baseline')
for _ in tqdm(range(EPOCHS)):
preds = model(X_train.float().to(device))
preds = preds.view(len(preds),)
preds.to(device)
loss = criterion(preds,y_train)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,X_test,y_test,model),'accuracy':get_accuracy(X_train,y_train,model),'val_accuracy':get_accuracy(X_test,y_test,model)})
wandb.finish()
```
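The `get_accuracy` loop above scores one sample at a time in Python. A vectorized sketch of the same computation (NumPy standing in for torch tensors here, assuming sigmoid outputs thresholded at 0.5) does it in one shot:

```python
import numpy as np

# Vectorized version of the per-sample accuracy loop above: threshold
# sigmoid probabilities at 0.5 and compare against labels elementwise.
def binary_accuracy(probs, targets):
    preds = (np.asarray(probs) >= 0.5).astype(int)
    return round(float((preds == np.asarray(targets)).mean()), 3)

print(binary_accuracy([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 0]))  # 0.75
```

The same trick works on GPU tensors with `(preds.round() == y).float().mean()`, avoiding a host-side loop over the dataset.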
<a href="https://colab.research.google.com/github/AdrianduPlessis/DS-Unit-2-Regression-Classification/blob/master/module3/assignment_regression_classification_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science, Unit 2: Predictive Modeling
# Regression & Classification, Module 3
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`) using a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.**
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do exploratory visualizations with Seaborn.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Fit a linear regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites).
(That book is good regardless of whether your cultural worldview is inferential statistics or predictive machine learning)
- [ ] Read Leo Breiman's paper, ["Statistical Modeling: The Two Cultures"](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html):
> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
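A minimal runnable sketch of the pipeline idea on synthetic data (the data and numbers here are made up for illustration): chaining `SelectKBest` feature selection with linear regression so a single fit/predict call runs the whole sequence, as the stretch goal suggests.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression

# Synthetic data: only columns 0 and 3 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.1, size=100)

pipe = Pipeline([
    ('select', SelectKBest(f_regression, k=2)),  # keep the 2 best predictors
    ('model', LinearRegression()),
])
pipe.fit(X, y)
print(pipe.score(X, y))  # R^2 near 1 on this easy synthetic problem
```

Because the selector is fit inside the pipeline, cross-validating `pipe` selects features using only the training fold each time, which is exactly the leakage-safety point quoted above.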
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module3')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv('../data/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
    df['SALE_PRICE']
    .str.replace('$', '', regex=False)  # '$' is a regex metacharacter, so match it literally
    .str.replace('-', '', regex=False)
    .str.replace(',', '', regex=False)
    .astype(int)
)
'''
Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
Do exploratory visualizations with Seaborn.
Do one-hot encoding of categorical features.
Do feature selection with SelectKBest.
Fit a linear regression model with multiple features.
Get mean absolute error for the test set.
As always, commit your notebook to your fork of the GitHub repo.
'''
##Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
#Convert to datetime to facilitate split
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
#Define split location
test_start_barrier = pd.to_datetime('04-01-2019')
#Split data based on barrier
train = df[df.SALE_DATE < test_start_barrier]
test = df[df.SALE_DATE >= test_start_barrier]
#Sanity check
df.shape, train.shape, test.shape
#Preview the numeric columns
train.select_dtypes(include='number').head()
##Do exploratory visualizations with Seaborn.
import seaborn as sns
import matplotlib.pyplot as plt
sns.catplot(x="BOROUGH", y='SALE_PRICE', data = train, kind = 'bar', color = 'grey');
sns.catplot(x="TAX_CLASS_AT_TIME_OF_SALE", y='SALE_PRICE', data = train, kind = 'bar', color = 'grey');
##Do one-hot encoding of categorical features.
#Identify binary encoded columns
#good piece to have a function prepped for sprint
train.head()
#no binaries
train.describe(exclude='number')
```
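The checklist above also asks for one-hot encoding of categorical features, which the notebook stops short of. A minimal sketch on made-up data: `pd.get_dummies` expands a categorical column into 0/1 indicator columns.

```python
import pandas as pd

# Made-up miniature of the NYC data: BOROUGH is categorical.
df = pd.DataFrame({
    'BOROUGH': ['1', '3', '1'],
    'SALE_PRICE': [500000, 700000, 450000],
})
encoded = pd.get_dummies(df, columns=['BOROUGH'])
print(list(encoded.columns))  # ['SALE_PRICE', 'BOROUGH_1', 'BOROUGH_3']
```

On the real data, fit the encoding on the train split and apply the same columns to test (or use `category_encoders.OneHotEncoder`, which the Colab setup installs) so the two sets stay aligned.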
```
import pickle
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import Conv1D
from tensorflow.keras.layers import MaxPooling1D
from tensorflow.keras.layers import GlobalAveragePooling1D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, Flatten, MaxPooling1D
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.datasets import load_iris
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/My Drive/University/ProjetML/')
os.chdir('Data/Donnees_ENT/')
# load in data with 80:20 split for training:test
with open('Final_Data_sets80.pickle', "rb") as f:
x_train, y_train, x_test, y_test = pickle.load(f)
# convert (498700, 32) to (498700, 32, 1) so fits in Conv1D input layer
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], 1)
```
## CNN Architectures
1D CNNs are often used for sensor data or time-series data.

 A 1D Global max pooling
The difference from the previous models, which consisted only of Dense layers, is the presence of the convolution layers for which this type of network is named. A convolution layer slides over one set of values, computing a dot product as it goes and producing one value from several. We can set the number of kernels (the functions that iterate over the array to calculate the dot product), the size of the window each kernel uses, and the size of its stride. Because each convolution creates one value from several, we would lose dimensionality at every Conv layer, so we use padding.
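The sliding dot product can be sketched numerically (plain NumPy, 'valid' mode with no padding, so the output is shorter than the input):

```python
import numpy as np

# One Conv1D kernel of size 2, stride 1: a dot product at each window position.
signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, -1.0])  # this kernel computes local differences

out = np.array([signal[i:i + 2] @ kernel for i in range(len(signal) - 1)])
print(out)  # [-1. -1. -1. -1.]
```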
After the first Conv1D we add a pooling layer. This reduces the spatial size of the convolved features and also helps limit overfitting, which is a common problem with CNNs. Here we use max pooling instead of average pooling: taking the maximum rather than the average value in each window discards noisy activations, so it suppresses noise better than average pooling.
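Max pooling with window 2 and stride 2 can likewise be sketched in a couple of lines: each pair of activations is reduced to its maximum, halving the length.

```python
import numpy as np

# 1D max pooling, pool size 2, stride 2 (assumes even-length input).
activations = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4])
pooled = activations.reshape(-1, 2).max(axis=1)
print(pooled)  # [0.9 0.3 0.8]
```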
### Basic Convolutional Neural Network
```
# Basic CNN
model = Sequential()
model.add(Conv1D(64, 2, activation="relu", input_shape=(32,1)))
model.add(Flatten())
model.add(Dense(5, activation = 'softmax'))
model.compile(loss = 'sparse_categorical_crossentropy',
optimizer = "adam",
metrics = ['accuracy'])
model.summary()
model.fit(x_train, y_train, batch_size=32,epochs=30, verbose=2, validation_split=0.2, callbacks=[tf.keras.callbacks.EarlyStopping('loss', patience=3)])
acc = model.evaluate(x_test, y_test)
print("Loss:", acc[0], " Accuracy:", acc[1])
```
## Intermediate Convolutional Neural Network
```
model = Sequential()
model.add(Conv1D(28, 2, activation="relu", input_shape=(32,1)))
model.add(Conv1D(28, 2, activation="relu"))
model.add(MaxPooling1D(2))
model.add(Conv1D(64, 2, activation="relu"))
model.add(Conv1D(64, 2,activation="relu"))
model.add(MaxPooling1D(2))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.3))
model.add(Dense(6, activation = 'softmax'))
model.compile(loss = 'sparse_categorical_crossentropy',
optimizer = "adam",
metrics = ['accuracy'])
model.fit(x_train, y_train, batch_size=32,epochs=30, verbose=2, validation_split=0.2, callbacks=[tf.keras.callbacks.EarlyStopping('loss', patience=3)])
acc = model.evaluate(x_test, y_test)
print("Loss:", acc[0], " Accuracy:", acc[1])
```
Principles of CNN design:
- by convention, kernel size stays the same throughout the network
- the number of filters should start low and increase through the network
- keep adding layers until the model over-fits, then regularise using L1/L2 regularisation, dropout, or batch norm
- be inspired by patterns in classic networks, such as Conv-Pool-Conv-Pool or Conv-Conv-Pool
```
callbacks_list = [
keras.callbacks.ModelCheckpoint(
filepath='best_model.{epoch:02d}-{val_loss:.2f}.h5',
monitor='val_loss', save_best_only=True),
keras.callbacks.EarlyStopping(monitor='accuracy', patience=1)]
model.fit(x_train, y_train, batch_size=32,epochs=50, verbose=2, callbacks=callbacks_list, validation_split=0.2)
acc = model.evaluate(x_test, y_test)
print("Loss:", acc[0], " Accuracy:", acc[1])
def build_model(hp):
    model = Sequential()
    model.add(Conv1D(hp.Int('n_filt_1', 4, 32, 4), 2, activation="relu", input_shape=(32,1)))
    model.add(Conv1D(hp.Int('n_filt_1', 4, 32, 4), 2, activation="relu"))
    model.add(MaxPooling1D())
    for i in range(hp.Int('n_layers', 1, 12)):
        filt_nb = hp.Int(f'conv_{i}_units', min_value=4, max_value=32, step=4)
        model.add(Conv1D(filt_nb, hp.Int(f'kernel_{i}_size', 1, 4), activation="relu"))
        model.add(Conv1D(filt_nb, hp.Int(f'kernel_{i}_size', 1, 4), activation="relu"))
        model.add(MaxPooling1D())
    model.add(Flatten())
    model.add(BatchNormalization())
    model.add(Dense(6, activation='softmax'))
    adam = keras.optimizers.Adam(hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4]))
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer=adam,
                  metrics=['accuracy'])
    return model
from kerastuner import Hyperband
tuner = Hyperband(build_model, max_epochs=150, objective="val_accuracy",project_name="cnn", executions_per_trial=2)
# Display search space summary
tuner.search_space_summary()
tuner.search(x=x_train,y=y_train, validation_data=(x_test,y_test), callbacks=[tf.keras.callbacks.EarlyStopping('val_loss', patience=3)] )
tuner.results_summary()
tuner.get_best_hyperparameters(num_trials=1)
```
## Auto-Optimised Models
Here, before moving on to more appropriate CNNs and RNNs, we make one last attempt to get the best performance using only dense/fully-connected layers, by using the keras-tuner package to tune the hyperparameters of our model. Usually in machine learning we adjust each of these manually through trial and error, but with this package we can automate the combinatorial process of optimising each hyperparameter. Here we have decided to auto-optimise the hyperparameters controlling the number of hidden layers and the number of neurons in each of those hidden layers.
```
!pip install keras-tuner
from kerastuner import RandomSearch
def build_model(hp):
    d1 = hp.Int("d1_units", min_value=6, max_value=256, step=16)
    model = keras.models.Sequential()
    model.add(tf.keras.layers.Dense(d1, activation='relu', input_dim=32))
    for i in range(hp.Int('n_layers', 1, 8)):  # vary the number of hidden layers
        model.add(Dense(hp.Int(f'conv_{i}_units',
                               min_value=6,
                               max_value=256,
                               step=16), activation='relu'))
    model.add(Dense(6, activation='softmax'))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
tuner = RandomSearch(build_model, objective="val_accuracy", max_trials=5,executions_per_trial=5)
tuner.search(x=x_train,y=y_train,epochs=15,validation_data=(x_test,y_test))
```
### Build the Model
```
model = Sequential()
model.add(Conv1D(64, 2, activation="relu", input_shape=(4,1)))
model.add(Conv1D(64, 2, activation="relu"))
model.add(Dense(16, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(3, activation = 'softmax'))
model.compile(loss = 'sparse_categorical_crossentropy',
optimizer = "adam",
metrics = ['accuracy'])
model.summary()
model.fit(xtrain, ytrain, batch_size=16,epochs=100, verbose=2, validation_split=0.2)
```
# Create redo records
This Jupyter notebook shows how to create a Senzing "redo record".
It assumes a G2 database that is empty.
Essentially the steps are to create very similar records under different data sources,
then delete one of the records. This produces a "redo record".
## G2Engine
### Senzing initialization
Create an instance of G2Engine, G2ConfigMgr, and G2Config.
```
from G2Engine import G2Engine
from G2ConfigMgr import G2ConfigMgr
from G2Config import G2Config
g2_engine = G2Engine()
try:
    g2_engine_flags = G2Engine.G2_EXPORT_DEFAULT_FLAGS
    g2_engine.initV2(
        "pyG2EngineForRedoRecords",
        senzing_config_json,
        verbose_logging)
except G2Exception.G2ModuleGenericException as err:
    print(g2_engine.getLastException())
g2_configuration_manager = G2ConfigMgr()
try:
    g2_configuration_manager.initV2(
        "pyG2ConfigMgrForRedoRecords",
        senzing_config_json,
        verbose_logging)
except G2Exception.G2ModuleGenericException as err:
    print(g2_configuration_manager.getLastException())
g2_config = G2Config()
try:
    g2_config.initV2(
        "pyG2ConfigForRedoRecords",
        senzing_config_json,
        verbose_logging)
    config_handle = g2_config.create()
except G2Exception.G2ModuleGenericException as err:
    print(g2_config.getLastException())
```
### primeEngine
```
try:
    g2_engine.primeEngine()
except G2Exception.G2ModuleGenericException as err:
    print(g2_engine.getLastException())
```
### Variable initialization
```
load_id = None
```
### Create add data source function
Create a data source with a name having the form `TEST_DATA_SOURCE_nnn`.
```
def add_data_source(datasource_suffix):
    datasource_prefix = "TEST_DATA_SOURCE_"
    datasource_id = "{0}{1}".format(datasource_prefix, datasource_suffix)
    configuration_comment = "Added {}".format(datasource_id)
    g2_config.addDataSource(config_handle, datasource_id)
    configuration_bytearray = bytearray()
    return_code = g2_config.save(config_handle, configuration_bytearray)
    configuration_json = configuration_bytearray.decode()
    configuration_id_bytearray = bytearray()
    g2_configuration_manager.addConfig(configuration_json, configuration_comment, configuration_id_bytearray)
    g2_configuration_manager.setDefaultConfigID(configuration_id_bytearray)
    g2_engine.reinitV2(configuration_id_bytearray)
```
### Create add record function
Create a record with the id having the form `RECORD_nnn`.
**Note:** this is essentially the same record with only the `DRIVERS_LICENSE_NUMBER` modified slightly.
```
import json

def add_record(record_id_suffix, datasource_suffix):
    datasource_prefix = "TEST_DATA_SOURCE_"
    record_id_prefix = "RECORD_"
    datasource_id = "{0}{1}".format(datasource_prefix, datasource_suffix)
    record_id = "{0}{1}".format(record_id_prefix, record_id_suffix)
    data = {
        "NAMES": [{
            "NAME_TYPE": "PRIMARY",
            "NAME_LAST": "Smith",
            "NAME_FIRST": "John",
            "NAME_MIDDLE": "M"
        }],
        "PASSPORT_NUMBER": "PP11111",
        "PASSPORT_COUNTRY": "US",
        "DRIVERS_LICENSE_NUMBER": "DL1{:04d}".format(record_id_suffix),
        "SSN_NUMBER": "111-11-1111"
    }
    data_as_json = json.dumps(data)
    g2_engine.addRecord(
        datasource_id,
        record_id,
        data_as_json,
        load_id)
```
## Redo record
### Print data sources
Print the list of currently defined data sources.
```
try:
    datasources_bytearray = bytearray()
    g2_config.listDataSources(config_handle, datasources_bytearray)
    datasources_dictionary = json.loads(datasources_bytearray.decode())
    print(datasources_dictionary)
except G2Exception.G2ModuleGenericException as err:
    print(g2_config.getLastException())
```
### Add data sources and records
```
try:
    add_data_source(1)
    add_record(1, 1)
    add_record(2, 1)
    add_data_source(2)
    add_record(3, 2)
    add_record(4, 2)
    add_data_source(3)
    add_record(5, 3)
    add_record(6, 3)
except G2Exception.G2ModuleGenericException as err:
    print(g2_engine.getLastException())
```
### Delete record
Deleting a record will create a "redo record".
```
try:
    g2_engine.deleteRecord("TEST_DATA_SOURCE_3", "RECORD_5", load_id)
except G2Exception.G2ModuleGenericException as err:
    print(g2_engine.getLastException())
```
### Count redo records
The `count_of_redo_records` will show how many redo records are in Senzing's queue of redo records.
```
try:
    count_of_redo_records = g2_engine.countRedoRecords()
    print("Number of redo records: {0}".format(count_of_redo_records))
except G2Exception.G2ModuleGenericException as err:
    print(g2_engine.getLastException())
```
### Print data sources again
Print the list of currently defined data sources.
```
try:
    datasources_bytearray = bytearray()
    g2_config.listDataSources(config_handle, datasources_bytearray)
    datasources_dictionary = json.loads(datasources_bytearray.decode())
    print(datasources_dictionary)
except G2Exception.G2ModuleGenericException as err:
    print(g2_config.getLastException())
```
# Load packages and data
```
import numpy as np
import pandas as pd
import seaborn as sns; sns.set(style="ticks", color_codes=True)
import matplotlib.pyplot as plt
data = pd.read_csv('breast-cancer-wisconsin.txt', sep=",", header=0, index_col = 0)
data.dtypes
# some of the columns have bad data, since all columns contain numeric data
```
## Data cleaning
```
# check the categories present in the data
for column in list(data.columns[1:]):
    print(column, data[column].unique())
# NAs are represented in different ways: nan, '#', '?', and 'No idea'
# and some entries have a Class value of 20 or 40 where 2 or 4 was expected
```
### Coerce all data to be numeric
This will turn the invalid entries into NAs
```
data_n = data.apply(pd.to_numeric, errors='coerce')
```
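As a toy illustration (made-up values mirroring the bad entries seen above), `pd.to_numeric` with `errors='coerce'` silently turns anything non-numeric into `NaN`:

```python
import pandas as pd
import numpy as np

# A toy column containing the bad entries seen above ('#', '?', 'No idea')
raw = pd.Series(['5', '#', '?', 'No idea', '3', np.nan])
clean = pd.to_numeric(raw, errors='coerce')  # non-numeric entries become NaN
print(clean.tolist())
```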
### Handle rows with values that are 10 times too large
```
dat_10 = data_n[data_n.Class > 4]
dat_d10 = dat_10.iloc[:,1:]/10
dat_d10.insert(0, 'ID', dat_10['ID'])
# all values seem to be 10 times larger
dat = data_n[data_n.Class <= 4].append(dat_d10)
```
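The same fix can be sketched on a toy frame. Note that `DataFrame.append`, used above, is deprecated and removed in pandas 2.x, so the sketch uses `pd.concat` instead:

```python
import pandas as pd

# A toy frame where one row was entered at 10x scale (Class 40 instead of 4)
df = pd.DataFrame({'ID': [1, 2], 'Thickness': [5, 50], 'Class': [2, 40]})
scaled = df[df.Class > 4].copy()
scaled[['Thickness', 'Class']] = scaled[['Thickness', 'Class']] / 10
# pd.concat replaces the removed DataFrame.append
fixed = pd.concat([df[df.Class <= 4], scaled], ignore_index=True)
print(fixed)
```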
## Check number of unique IDs:
```
print(dat.ID.unique().shape)
# there are only 665 unique IDs
print(dat.shape)
# But there are 15776 rows of data?
# look at how IDs repeated:
dat.groupby('ID').count().sort_values(by = 'Class', ascending=False).head(20).plot(y = "Class", kind = "barh")
plt.xlabel("Count")
plt.ylabel("ID")
plt.show()
# look at the unique rows for these super-duplicated IDs:
id_dups = data.groupby('ID').count().sort_values(by = 'Class', ascending=False).head(15).reset_index().ID
data[data.ID.isin(id_dups)].drop_duplicates().sort_values('ID')
# each ID with lots of duplicates has only one or two unique rows with valid data
# also when there is missing value, clump thickness is the only column with full data
data_cleaned = dat.drop_duplicates()
print(data_cleaned.shape)
data_dups = data_cleaned.groupby('ID').count().\
sort_values(by = 'Class', ascending=False).reset_index()[['ID', 'Class']]\
.rename(columns={"Class": "ct"})
datac = data_cleaned.merge(data_dups, left_on='ID', right_on='ID')
datacc = datac[datac.ct == 1]
print(datacc.shape)
# there are 627 samples with only one row
```
- I am not sure whether a sample can have multiple rows
- But having multiple data points from a sample will violate the assumption of independence
- So I filtered the data down to the samples with only one entry
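The keep-only-singleton-IDs step can be sketched on toy data (the column names here are illustrative):

```python
import pandas as pd

# Toy data: ID 1 has two conflicting rows, IDs 2 and 3 have one row each
df = pd.DataFrame({'ID': [1, 1, 2, 3], 'Class': [2, 4, 4, 2]}).drop_duplicates()
counts = df.groupby('ID').size().rename('ct').reset_index()
single = df.merge(counts, on='ID').query('ct == 1').drop(columns='ct')
print(single.ID.tolist())   # only the IDs with exactly one row survive
```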
## Handling missing values
```
datacc[datacc.isnull().any(axis=1)]
# not a lot of NAs, and only in the Bare Nuclei column
# will remove these
dataf = datacc.dropna()
dataf.groupby('Class').count()
# The two classes are fairly balanced
```
## Visualize the relationship between the different factors and cancer class:
```
# Transform data into the long format for vis
dat_long_for_vis = pd.melt(dataf.iloc[:, 1:11], id_vars=['Class'])
# Make histograms faceted by class
bins = np.linspace(0, 9, 10)
g = sns.FacetGrid(dat_long_for_vis, row = "variable", col="Class", margin_titles=True, sharey= False)
g.map(plt.hist, "value", color="steelblue", bins = bins)
# All of the features have some importance
corr = dataf.iloc[:, 1:11].corr()
corr
```
## Check correlations between features:
```
plt.figure(figsize=(8,8))
corr = dataf.iloc[:, 1:11].corr()
mask = np.triu(np.ones_like(corr))
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask = mask, cmap='Blues', annot=True, fmt='.2f', center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
# The correlations aren't super high
```
# Compare feature importance using modeling
```
# import libraries
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
logi = LogisticRegression()
logib = LogisticRegression(class_weight='balanced')
from sklearn.model_selection import cross_validate
from sklearn.metrics import recall_score
from sklearn.metrics import roc_curve
X = dataf.iloc[:, 1:10]
y = dataf['Class']/2 -1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=55, stratify = y)
cross_validate(logi, X_train, y_train, cv=5, scoring = 'recall_weighted')
scores = cross_validate(logib, X_train, y_train, cv=5, scoring = 'recall_weighted')
print(scores)
print(scores['test_score'].mean())
# balanced weight is better
model = logib.fit(X_train, y_train)
y_pred = model.predict(X_test)
# ROC curve:
from sklearn import metrics
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
roc_auc = metrics.auc(fpr, tpr)
display = metrics.RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc, estimator_name='example estimator')
display.plot()
plt.show()
```
## Look at feature importance
Using coefficients from the logistic regression model
```
features = pd.DataFrame(data = {'feature': list(X.columns), 'importance': model.coef_.tolist()[0]})\
.sort_values(by = 'importance')
features
```
## Look at alternative feature importance
Since collecting these data could be expensive, can we collect less data?
What if we only collect one type of the data?
```
# Function to test feature importance using just that feature
def feature_test(name):
    scores = cross_validate(logib, X_train[name].values[:, np.newaxis], y_train, cv=5, scoring='recall_weighted')
    return scores['test_score'].mean()

s = []
for name in list(X.columns):
    s.append([name, feature_test(name)])
features2 = features.merge(pd.DataFrame(s, columns = ["feature", "single_importance"]),
left_on='feature', right_on='feature')
sns.barplot(x="importance", y="feature", data=features2,
color="b")
plt.title("Feature importance for full model")
sns.set_color_codes("muted")
sns.barplot(x="single_importance", y="feature", data=features2,
color="dodgerblue")
plt.title("Recall for models with a single feature")
```
## Feature selection using recursive elimination
```
from sklearn.feature_selection import RFECV
selector = RFECV(logib, step=1, cv=5, scoring = 'recall_weighted')
selector.fit(X_train, y_train)
selector.ranking_
reduced_features = [x for x, y in zip(list(X_train.columns),list(selector.ranking_)) if y == 1]
scores = cross_validate(logib, X_train[reduced_features], y_train, cv=5, scoring = 'recall_weighted')
print(scores['test_score'].mean())
model_r = logib.fit(X_train[reduced_features], y_train)
r_features = pd.DataFrame(data = {'feature': reduced_features, 'importance_r': model_r.coef_.tolist()[0]})\
.sort_values(by = 'importance_r')
sns.barplot(x="importance_r", y="feature", data=r_features,
color="b")
plt.title("Feature importance for reduced model")
```
## Exporting all feature importance measures for bubble charts
```
features_all = features2.merge(r_features, how = 'outer',
left_on='feature', right_on='feature')
features_all.to_csv("feature_importance.csv")
```
# Conclusion
The three most important features for predicting the cancer status of a sample are:
- the number of Bare Nuclei
- the Uniformity of Cell Size
- Mitoses
If data is available on all nine features, the logistic regression model has a 97% recall.
However, if resources are extremely limited, Uniformity of Cell Size is the most informative feature on its own.
# Deutsch-Jozsa Algorithm
In this section, we first introduce the Deutsch-Jozsa problem, and classical and quantum algorithms to solve it. We then implement the quantum algorithm using Qiskit, and run it on a simulator and device.
## Contents
1. [Introduction](#introduction)
1.1 [Deutsch-Jozsa Problem](#djproblem)
   1.2 [The Classical Solution](#classical-solution)
1.3 [The Quantum Solution](#quantum-solution)
1.4 [Why Does This Work?](#why-does-this-work)
2. [Worked Example](#example)
3. [Creating Quantum Oracles](#creating-quantum-oracles)
4. [Qiskit Implementation](#implementation)
4.1 [Constant Oracle](#const_oracle)
4.2 [Balanced Oracle](#balanced_oracle)
4.3 [The Full Algorithm](#full_alg)
4.4 [Generalised Circuit](#general_circs)
5. [Running on Real Devices](#device)
6. [Problems](#problems)
7. [References](#references)
## 1. Introduction <a id='introduction'></a>
The Deutsch-Jozsa algorithm, first introduced in Reference [1], was the first example of a quantum algorithm that performs better than the best classical algorithm. It showed that there can be advantages to using a quantum computer as a computational tool for a specific problem.
### 1.1 Deutsch-Jozsa Problem <a id='djproblem'> </a>
We are given a hidden Boolean function $f$, which takes as input a string of bits, and returns either $0$ or $1$, that is:
$$
f(\{x_0,x_1,x_2,...\}) \rightarrow 0 \textrm{ or } 1 \textrm{ , where } x_n \textrm{ is } 0 \textrm{ or } 1$$
The property of the given Boolean function is that it is guaranteed to either be balanced or constant. A constant function returns all $0$'s or all $1$'s for any input, while a balanced function returns $0$'s for exactly half of all inputs and $1$'s for the other half. Our task is to determine whether the given function is balanced or constant.
Note that the Deutsch-Jozsa problem is an $n$-bit extension of the single bit Deutsch problem.
### 1.2 The Classical Solution <a id='classical-solution'> </a>
Classically, in the best case, two queries to the oracle can determine if the hidden Boolean function, $f(x)$, is balanced:
e.g. if we get both $f(0,0,0,...)\rightarrow 0$ and $f(1,0,0,...) \rightarrow 1$, then we know the function is balanced as we have obtained the two different outputs.
In the worst case, if we continue to see the same output for each input we try, we will have to check exactly half of all possible inputs plus one in order to be certain that $f(x)$ is constant. Since the total number of possible inputs is $2^n$, this implies that we need $2^{n-1}+1$ trial inputs to be certain that $f(x)$ is constant in the worst case. For example, for a $4$-bit string, if we checked $8$ out of the $16$ possible combinations, getting all $0$'s, it is still possible that the $9^\textrm{th}$ input returns a $1$ and $f(x)$ is balanced. Probabilistically, this is a very unlikely event. In fact, if we get the same result continually in succession, we can express the probability that the function is constant as a function of $k$ inputs as:
$$ P_\textrm{constant}(k) = 1 - \frac{1}{2^{k-1}} \qquad \textrm{for } k \leq 2^{n-1}$$
Realistically, we could opt to truncate our classical algorithm early, say if we were over x% confident. But if we want to be 100% confident, we would need to check $2^{n-1}+1$ inputs.
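As a quick sanity check of the formula above, a few lines of Python (illustrative only) compute the worst-case query count and the confidence after $k$ identical outputs for $n=4$:

```python
# Confidence that f is constant after k identical outputs, and the
# worst-case classical query count for certainty, for an n-bit input
def p_constant(k):
    return 1 - 1 / 2 ** (k - 1)

n = 4
worst_case = 2 ** (n - 1) + 1   # 9 queries needed for certainty when n = 4
probs = [p_constant(k) for k in range(2, 6)]
print(worst_case, probs)   # 9 [0.5, 0.75, 0.875, 0.9375]
```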
### 1.3 Quantum Solution <a id='quantum-solution'> </a>
Using a quantum computer, we can solve this problem with 100% confidence after only one call to the function $f(x)$, provided we have the function $f$ implemented as a quantum oracle, which maps the state $\vert x\rangle \vert y\rangle $ to $ \vert x\rangle \vert y \oplus f(x)\rangle$, where $\oplus$ is addition modulo $2$. Below is the generic circuit for the Deutsch-Jozsa algorithm.

Now, let's go through the steps of the algorithm:
<ol>
<li>
Prepare two quantum registers. The first is an $n$-qubit register initialised to $|0\rangle$, and the second is a one-qubit register initialised to $|1\rangle$:
$$\vert \psi_0 \rangle = \vert0\rangle^{\otimes n} \vert 1\rangle$$
</li>
<li>
Apply a Hadamard gate to each qubit:
$$\vert \psi_1 \rangle = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle \left(|0\rangle - |1 \rangle \right)$$
</li>
<li>
Apply the quantum oracle, which maps $\vert x\rangle \vert y\rangle$ to $\vert x\rangle \vert y \oplus f(x)\rangle$:
$$
\begin{aligned}
\lvert \psi_2 \rangle
& = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1} \vert x\rangle (\vert f(x)\rangle - \vert 1 \oplus f(x)\rangle) \\
& = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\rangle ( |0\rangle - |1\rangle )
\end{aligned}
$$
since for each $x$, $f(x)$ is either $0$ or $1$.
</li>
<li>
At this point the second single qubit register may be ignored. Apply a Hadamard gate to each qubit in the first register:
$$
\begin{aligned}
\lvert \psi_3 \rangle
& = \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)}
\left[ \sum_{y=0}^{2^n-1}(-1)^{x \cdot y}
\vert y \rangle \right] \\
& = \frac{1}{2^n}\sum_{y=0}^{2^n-1}
\left[ \sum_{x=0}^{2^n-1}(-1)^{f(x)}(-1)^{x \cdot y} \right]
\vert y \rangle
\end{aligned}
$$
where $x \cdot y = x_0y_0 \oplus x_1y_1 \oplus \ldots \oplus x_{n-1}y_{n-1}$ is the sum of the bitwise product.
</li>
<li>
Measure the first register. Notice that the probability of measuring $\vert 0 \rangle ^{\otimes n} = \lvert \frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)} \rvert^2$, which evaluates to $1$ if $f(x)$ is constant and $0$ if $f(x)$ is balanced.
</li>
</ol>
### 1.4 Why Does This Work? <a id='why-does-this-work'> </a>
- **Constant Oracle**
When the oracle is *constant*, it has no effect (up to a global phase) on the input qubits, and the quantum states before and after querying the oracle are the same. Since the H-gate is its own inverse, in Step 4 we reverse Step 2 to obtain the initial quantum state of $|00\dots 0\rangle$ in the first register.
$$
H^{\otimes n}\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
=
\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
\quad \xrightarrow{\text{after } U_f} \quad
H^{\otimes n}\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
$$
- **Balanced Oracle**
After step 2, our input register is an equal superposition of all the states in the computational basis. When the oracle is *balanced*, phase kickback adds a negative phase to exactly half these states:
$$
U_f \tfrac{1}{\sqrt{2^n}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}
=
\tfrac{1}{\sqrt{2^n}}\begin{bmatrix} -1 \\ 1 \\ -1 \\ \vdots \\ 1 \end{bmatrix}
$$
The quantum state after querying the oracle is orthogonal to the quantum state before querying the oracle. Thus, in Step 4, when applying the H-gates, we must end up with a quantum state that is orthogonal to $|00\dots 0\rangle$. This means we should never measure the all-zero state.
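The two cases above can be checked numerically with a small NumPy sketch of the state-vector algebra (an illustration, not a Qiskit simulation; the balanced oracle here is the parity function): the all-zeros probability comes out as 1 for a constant oracle and 0 for a balanced one.

```python
import numpy as np
from functools import reduce

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)   # Hadamard on every qubit of the input register

def prob_all_zeros(f):
    # Input-register state after phase kickback: (-1)^f(x) / sqrt(2^n)
    v = np.array([(-1) ** f(x) for x in range(2 ** n)]) / np.sqrt(2 ** n)
    return abs(Hn @ v)[0] ** 2   # probability of measuring |00...0>

constant = lambda x: 0
balanced = lambda x: bin(x).count('1') % 2   # parity is a balanced function
print(prob_all_zeros(constant))   # ~1.0
print(prob_all_zeros(balanced))   # ~0.0
```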
## 2. Worked Example <a id='example'></a>
Let's go through a specific example for a two-bit balanced function, using an oracle built from two CNOTs.
<ol>
<li> The first register of two qubits is initialized to $|00\rangle$ and the second register qubit to $|1\rangle$
$$\lvert \psi_0 \rangle = \lvert 0 0 \rangle_1 \lvert 1 \rangle_2 $$
</li>
<li> Apply Hadamard on all qubits
$$\lvert \psi_1 \rangle = \frac{1}{2} \left( \lvert 0 0 \rangle_1 + \lvert 0 1 \rangle_1 + \lvert 1 0 \rangle_1 + \lvert 1 1 \rangle_1 \right) \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) $$
</li>
<li> The oracle function can be implemented as $\text{Q}_f = CX_{1a}CX_{2a}$,
$$
\begin{align*}
\lvert \psi_2 \rangle = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_1 \left( \lvert 0 \oplus 0 \oplus 0 \rangle_2 - \lvert 1 \oplus 0 \oplus 0 \rangle_2 \right) \\
+ \lvert 0 1 \rangle_1 \left( \lvert 0 \oplus 0 \oplus 1 \rangle_2 - \lvert 1 \oplus 0 \oplus 1 \rangle_2 \right) \\
+ \lvert 1 0 \rangle_1 \left( \lvert 0 \oplus 1 \oplus 0 \rangle_2 - \lvert 1 \oplus 1 \oplus 0 \rangle_2 \right) \\
+ \lvert 1 1 \rangle_1 \left( \lvert 0 \oplus 1 \oplus 1 \rangle_2 - \lvert 1 \oplus 1 \oplus 1 \rangle_2 \right) \right]
\end{align*}
$$
</li>
<li>Thus
$$
\begin{aligned}
\lvert \psi_2 \rangle & = \frac{1}{2\sqrt{2}} \left[ \lvert 0 0 \rangle_1 \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) - \lvert 0 1 \rangle_1 \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) - \lvert 1 0 \rangle_1 \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) + \lvert 1 1 \rangle_1 \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) \right] \\
& = \frac{1}{2} \left( \lvert 0 0 \rangle_1 - \lvert 0 1 \rangle_1 - \lvert 1 0 \rangle_1 + \lvert 1 1 \rangle_1 \right) \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) \\
& = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_{10} - \lvert 1 \rangle_{10} \right)\frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_{11} - \lvert 1 \rangle_{11} \right)\frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right)
\end{aligned}
$$
</li>
<li> Apply Hadamard on the first register
$$ \lvert \psi_3\rangle = \lvert 1 \rangle_{10} \lvert 1 \rangle_{11} \left( \lvert 0 \rangle_2 - \lvert 1 \rangle_2 \right) $$
</li>
<li> Measuring the first two qubits will give the non-zero $11$, indicating a balanced function.
</li>
</ol>
You can try out similar examples using the widget below. Press the buttons to add H-gates and oracles, re-run the cell and/or set `case="constant"` to try out different oracles.
```
from qiskit_textbook.widgets import dj_widget
dj_widget(size="small", case="balanced")
```
## 3. Creating Quantum Oracles <a id='creating-quantum-oracles'> </a>
Let's see some different ways we can create a quantum oracle.
For a constant function, it is simple:
$\qquad$ 1. if f(x) = 0, then apply the $I$ gate to the qubit in register 2.
$\qquad$ 2. if f(x) = 1, then apply the $X$ gate to the qubit in register 2.
For a balanced function, there are many different circuits we can create. One of the ways we can guarantee our circuit is balanced is by performing a CNOT for each qubit in register 1, with the qubit in register 2 as the target. For example:

In the image above, the top three qubits form the input register, and the bottom qubit is the output register. We can see which states give which output in the table below:
| States that output 0 | States that output 1 |
|:--------------------:|:--------------------:|
| 000 | 001 |
| 011 | 100 |
| 101 | 010 |
| 110 | 111 |
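A few lines of Python confirm this table (a sketch, with the oracle reduced to its classical function): a CNOT from every input qubit XORs each input bit into the output qubit, so $f(x)$ is the parity of $x$.

```python
# The CNOT-per-input-qubit oracle computes the parity (XOR) of the input bits;
# enumerate all 3-bit inputs to check the table above
f = lambda x: bin(x).count('1') % 2
zeros = [format(x, '03b') for x in range(8) if f(x) == 0]
ones = [format(x, '03b') for x in range(8) if f(x) == 1]
print(zeros)   # ['000', '011', '101', '110']
print(ones)    # ['001', '010', '100', '111']
```

Exactly half of the inputs map to each output, so the oracle is guaranteed balanced.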
We can change the results while keeping them balanced by wrapping selected controls in X-gates. For example, see the circuit and its results table below:

| States that output 0 | States that output 1 |
|:--------------------:|:--------------------:|
| 001 | 000 |
| 010 | 011 |
| 100 | 101 |
| 111 | 110 |
## 4. Qiskit Implementation <a id='implementation'></a>
We now implement the Deutsch-Jozsa algorithm for the example of a three-bit function, with both constant and balanced oracles. First let's do our imports:
```
# initialization
import numpy as np
# importing Qiskit
from qiskit import IBMQ, BasicAer
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, execute
# import basic plot tools
from qiskit.visualization import plot_histogram
```
Next, we set the size of the input register for our oracle:
```
# set the length of the n-bit input string.
n = 3
```
### 4.1 Constant Oracle <a id='const_oracle'></a>
Let's start by creating a constant oracle. In this case the input has no effect on the output, so we just randomly set the output qubit to be 0 or 1:
```
# set the length of the n-bit input string.
n = 3
const_oracle = QuantumCircuit(n+1)
output = np.random.randint(2)
if output == 1:
    const_oracle.x(n)
const_oracle.draw()
```
### 4.2 Balanced Oracle <a id='balanced_oracle'></a>
```
balanced_oracle = QuantumCircuit(n+1)
```
Next, we create a balanced oracle. As we saw in section 3, we can create a balanced oracle by performing CNOTs with each input qubit as a control and the output bit as the target. We can vary the input states that give 0 or 1 by wrapping some of the controls in X-gates. Let's first choose a binary string of length `n` that dictates which controls to wrap:
```
b_str = "101"
```
Now we have this string, we can use it as a key to place our X-gates. For each qubit in our circuit, we place an X-gate if the corresponding digit in `b_str` is `1`, or do nothing if the digit is `0`.
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
balanced_oracle.draw()
```
Next, we do our controlled-NOT gates, using each input qubit as a control, and the output qubit as a target:
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
# Use barrier as divider
balanced_oracle.barrier()
# Controlled-NOT gates
for qubit in range(n):
    balanced_oracle.cx(qubit, n)
balanced_oracle.barrier()
balanced_oracle.draw()
```
Finally, we repeat the code from two cells up to finish wrapping the controls in X-gates:
```
balanced_oracle = QuantumCircuit(n+1)
b_str = "101"
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
# Use barrier as divider
balanced_oracle.barrier()
# Controlled-NOT gates
for qubit in range(n):
    balanced_oracle.cx(qubit, n)
balanced_oracle.barrier()
# Place X-gates
for qubit in range(len(b_str)):
    if b_str[qubit] == '1':
        balanced_oracle.x(qubit)
# Show oracle
balanced_oracle.draw()
```
We have just created a balanced oracle! All that's left to do is see if the Deutsch-Jozsa algorithm can solve it.
### 4.3 The Full Algorithm <a id='full_alg'></a>
Let's now put everything together. This first step in the algorithm is to initialise the input qubits in the state $|{+}\rangle$ and the output qubit in the state $|{-}\rangle$:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
    dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
dj_circuit.draw()
```
Next, let's apply the oracle. Here we apply the `balanced_oracle` we created above:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
    dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
# Add oracle
dj_circuit += balanced_oracle
dj_circuit.draw()
```
Finally, we perform H-gates on the $n$-input qubits, and measure our input register:
```
dj_circuit = QuantumCircuit(n+1, n)
# Apply H-gates
for qubit in range(n):
    dj_circuit.h(qubit)
# Put qubit in state |->
dj_circuit.x(n)
dj_circuit.h(n)
# Add oracle
dj_circuit += balanced_oracle
# Repeat H-gates
for qubit in range(n):
    dj_circuit.h(qubit)
dj_circuit.barrier()
# Measure
for i in range(n):
    dj_circuit.measure(i, i)
# Display circuit
dj_circuit.draw()
```
Let's see the output:
```
# use local simulator
backend = BasicAer.get_backend('qasm_simulator')
shots = 1024
results = execute(dj_circuit, backend=backend, shots=shots).result()
answer = results.get_counts()
plot_histogram(answer)
```
We can see from the results above that we have a 0% chance of measuring `000`. This correctly predicts the function is balanced.
### 4.4 Generalised Circuits <a id='general_circs'></a>
Below, we provide a generalised function that creates Deutsch-Jozsa oracles and turns them into quantum gates. It takes `case` (either `'balanced'` or `'constant'`) and `n`, the size of the input register:
```
def dj_oracle(case, n):
    # We need to make a QuantumCircuit object to return
    # This circuit has n+1 qubits: the size of the input,
    # plus one output qubit
    oracle_qc = QuantumCircuit(n+1)
    # First, let's deal with the case in which oracle is balanced
    if case == "balanced":
        # First generate a random number that tells us which CNOTs to
        # wrap in X-gates:
        b = np.random.randint(1, 2**n)
        # Next, format 'b' as a binary string of length 'n', padded with zeros:
        b_str = format(b, '0'+str(n)+'b')
        # Next, we place the first X-gates. Each digit in our binary string
        # corresponds to a qubit, if the digit is 0, we do nothing, if it's 1
        # we apply an X-gate to that qubit:
        for qubit in range(len(b_str)):
            if b_str[qubit] == '1':
                oracle_qc.x(qubit)
        # Do the controlled-NOT gates for each qubit, using the output qubit
        # as the target:
        for qubit in range(n):
            oracle_qc.cx(qubit, n)
        # Next, place the final X-gates
        for qubit in range(len(b_str)):
            if b_str[qubit] == '1':
                oracle_qc.x(qubit)
    # Case in which oracle is constant
    if case == "constant":
        # First decide what the fixed output of the oracle will be
        # (either always 0 or always 1)
        output = np.random.randint(2)
        if output == 1:
            oracle_qc.x(n)
    oracle_gate = oracle_qc.to_gate()
    oracle_gate.name = "Oracle"  # To show when we display the circuit
    return oracle_gate
```
Let's also create a function that takes this oracle gate and performs the Deutsch-Jozsa algorithm on it:
```
def dj_algorithm(oracle, n):
dj_circuit = QuantumCircuit(n+1, n)
# Set up the output qubit:
dj_circuit.x(n)
dj_circuit.h(n)
# And set up the input register:
for qubit in range(n):
dj_circuit.h(qubit)
# Let's append the oracle gate to our circuit:
dj_circuit.append(oracle, range(n+1))
# Finally, perform the H-gates again and measure:
for qubit in range(n):
dj_circuit.h(qubit)
for i in range(n):
dj_circuit.measure(i, i)
return dj_circuit
```
Finally, let's use these functions to play around with the algorithm:
```
n = 4
oracle_gate = dj_oracle('balanced', n)
dj_circuit = dj_algorithm(oracle_gate, n)
dj_circuit.draw()
```
And see the results of running this circuit:
```
results = execute(dj_circuit, backend=backend, shots=1024).result()
answer = results.get_counts()
plot_histogram(answer)
```
## 5. Experiment with Real Devices <a id='device'></a>
We can run the circuit on the real device as shown below. We first look for the least-busy device that can handle our circuit.
```
# Load our saved IBMQ accounts and get the least busy backend device with greater than or equal to (n+1) qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= (n+1) and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
# Run our circuit on the least busy backend. Monitor the execution of the job in the queue
from qiskit.tools.monitor import job_monitor
shots = 1024
job = execute(dj_circuit, backend=backend, shots=shots, optimization_level=3)
job_monitor(job, interval = 2)
# Get the results of the computation
results = job.result()
answer = results.get_counts()
plot_histogram(answer)
```
As we can see, the most likely result is `1111`. The other results are due to errors in the quantum computation.
## 6. Problems <a id='problems'></a>
1. Are you able to create a balanced or constant oracle of a different form?
```
from qiskit_textbook.problems import dj_problem_oracle
oracle = dj_problem_oracle(1)
```
2. The function `dj_problem_oracle` (shown above) returns a Deutsch-Jozsa oracle for `n = 4` in the form of a gate. The gate takes 5 qubits as input, where the final qubit (`q_4`) is the output qubit (as with the example oracles above). You can get different oracles by giving `dj_problem_oracle` different integers between 1 and 5. Use the Deutsch-Jozsa algorithm to decide whether each oracle is balanced or constant (**Note:** it is highly recommended you try this example using the `qasm_simulator` instead of a real device).
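For comparison, a classical deterministic algorithm needs up to $2^{n-1}+1$ oracle queries in the worst case, whereas Deutsch-Jozsa needs just one. The sketch below (illustrative, independent of Qiskit; the function names are our own) shows why: a balanced function is forced to reveal both output values within that many queries.

```python
# Classical baseline for the Deutsch-Jozsa promise problem: query the oracle
# until two different outputs are seen, or until 2**(n-1) + 1 inputs have
# been tried (by then a balanced function must have shown both values).
def classify_classically(f, n):
    seen = set()
    for x in range(2**(n - 1) + 1):
        seen.add(f(x))
        if len(seen) > 1:
            return 'balanced'
    return 'constant'

n = 4
parity = lambda x: bin(x).count('1') % 2   # a balanced function
always_one = lambda x: 1                   # a constant function
# classify_classically(parity, n)      -> 'balanced'
# classify_classically(always_one, n)  -> 'constant'
```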
## 7. References <a id='references'></a>
1. David Deutsch and Richard Jozsa (1992). "Rapid solutions of problems by quantum computation". Proceedings of the Royal Society of London A. 439: 553–558. [doi:10.1098/rspa.1992.0167](https://doi.org/10.1098%2Frspa.1992.0167).
2. R. Cleve; A. Ekert; C. Macchiavello; M. Mosca (1998). "Quantum algorithms revisited". Proceedings of the Royal Society of London A. 454: 339–354. [doi:10.1098/rspa.1998.0164](https://doi.org/10.1098%2Frspa.1998.0164).
```
import qiskit
qiskit.__qiskit_version__
```
# Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.
**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
```
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import tensorflow as tf
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
## 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
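The exponential decay can be sketched numerically (a toy illustration, not part of the assignment): repeatedly multiplying a gradient vector by a slightly contractive weight matrix shrinks its norm geometrically with depth.

```python
import numpy as np

# Toy illustration: backprop multiplies the gradient by a weight matrix at
# every layer; if each matrix shrinks the signal by a factor of 0.9, the
# gradient norm decays like 0.9**depth.
rng = np.random.RandomState(0)
grad = rng.randn(64)
Q, _ = np.linalg.qr(rng.randn(64, 64))   # orthogonal matrix (norm-preserving)
W = 0.9 * Q                              # slightly contractive weight matrix
norms = []
for layer in range(50):
    grad = W.T @ grad                    # one backprop step
    norms.append(np.linalg.norm(grad))
# norms[-1] / norms[0] is roughly 0.9**49 -- nearly zero after 50 layers
```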
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
You are now going to solve this problem by building a Residual Network!
## 2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
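A quick numerical sanity check of why the identity is easy to learn (illustrative only, using plain NumPy rather than Keras): if the residual branch outputs zeros, the block passes its input through unchanged.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def residual_block(x, W1, W2):
    # main path: two linear layers with ReLU; shortcut: x itself
    return relu(x + W2 @ relu(W1 @ x))

x = np.abs(np.random.randn(8))        # a non-negative activation
W_zero = np.zeros((8, 8))             # residual branch driven to zero
y = residual_block(x, W_zero, W_zero)
# y equals x: with zero weights the block computes the identity "for free"
```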
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
### 2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 3 layers.</center></caption>
Here're the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [See reference](https://keras.io/layers/convolutional/#conv2d)
- To implement BatchNorm: [See reference](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block='a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
## 2.2 - The convolutional block
You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) For example, to reduce the activation dimensions's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
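To make the shape bookkeeping concrete, here is a NumPy sketch (not assignment code): a 1x1 convolution with stride (2,2) amounts to subsampling the spatial grid and applying a learned channel projection, which is exactly the role played by $W_s$.

```python
import numpy as np

# A 1x1 conv with stride (2,2) = keep every other pixel, then apply a
# learned linear map across channels. No spatial mixing, no non-linearity.
x = np.random.randn(4, 8, 8, 64)      # (m, H, W, C_in)
W_s = np.random.randn(64, 256)        # the 1x1 kernel is just a matrix
shortcut = x[:, ::2, ::2, :] @ W_s    # halve H and W, project the channels
# shortcut now has shape (4, 4, 4, 256), ready for the Add step
```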
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`.
- The BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '1'`.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv Hint](https://keras.io/layers/convolutional/#conv2d)
- [BatchNorm Hint](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Addition Hint](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s=2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same', name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid', name=conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f=2, filters=[2, 4, 6], stage=1, block='a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
## 3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
- The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
- The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
- The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
- The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
- The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
- The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
- The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
- The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)
Here're some other functions we used in the code below:
- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)
- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)
- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)
- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)
- Addition: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape=(64, 64, 3), classes=6):
"""
    Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')
# Stage 4 (≈6 lines)
X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D(pool_size=(2, 2), padding='same')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer=glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs=X_input, outputs=X, name='ResNet50')
return model
```
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape=(64, 64, 3), classes=6)
```
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
```
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
```
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
```
**Expected Output**:
<table>
<tr>
<td>
** Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
** Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model for only two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code for only a small number of epochs as well.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
```
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print("Loss = " + str(preds[0]))
print("Test Accuracy = " + str(preds[1]))
```
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
## 4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
You can also print a summary of your model by running the following code.
```
model.summary()
```
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
```
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
<font color='blue'>
**What you should remember:**
- Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the vanishing gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: the identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
### References
This notebook presents the ResNet algorithm due to He et al. (2015). The implementation here also took significant inspiration and follows the structure given in the github repository of Francois Chollet:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
# ReSyPE Training Pipeline
We introduce a framework for training arbitrary machine learning models to perform collaborative filtering on small and large datasets. Given the utility matrix as input, we outline two approaches for model training as discussed by C. Aggarwal in his book on *Recommender Systems*. We also propose an extension of these methodologies by applying clustering on the dataset before the model training.
## Machine Learning + Collaborative Filtering (ML+CF) Recommender System
### Approach 1: ML-based Collaborative Filtering on Utility Matrix with Reduced Dimensions
1. Fill the utility matrix with the matrix mean.
1. Choose a column j for which missing ratings will be predicted. Column j will be the label in the model, while the remaining columns (all columns other than j) will be the features.
1. Perform SVD on the feature matrix. The reduced matrix will be the new feature table used to predict the ratings for item j.
1. Train a model using the feature matrix as input and column j as output.
1. Repeat steps 2-4 for all items/columns.
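Approach 1 might be sketched as follows, using a made-up 5×4 utility matrix and ordinary least squares as a stand-in for whatever model is actually trained:

```python
import numpy as np

# Hypothetical toy utility matrix: rows are users, columns are items,
# NaN marks a missing rating.
R = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, 1.0, 2.0],
              [np.nan, 2.0, 4.0, 5.0],
              [3.0, 4.0, 2.0, np.nan],
              [2.0, np.nan, 5.0, 4.0]])

filled = np.where(np.isnan(R), np.nanmean(R), R)  # step 1: fill with the matrix mean

j = 2                                 # step 2: column whose ratings we predict
y = filled[:, j]                      # label
X = np.delete(filled, j, axis=1)      # features = all other columns

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # step 3: SVD on the features
k = 2
X_reduced = U[:, :k] * s[:k]          # rank-k feature table

# step 4: train a model; least squares stands in for any regressor
w, *_ = np.linalg.lstsq(X_reduced, y, rcond=None)
preds = X_reduced @ w                 # predicted ratings for item j
```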
### Approach 2: Iterative Approach to ML-based Item-wise Collaborative Filtering
1. Mean-center each row of the utility matrix to remove user bias.
1. Replace the values that are still missing after mean-centering with zero.
1. Choose a column j for which missing ratings will be predicted. Column j will be the label in the model, while the remaining columns (all columns other than j) will be the features.
1. Train a model using the feature matrix as input and column j as output.
1. Predict the missing ratings for column j.
1. Use the predicted values to update the missing ratings in the utility matrix.
1. Perform steps 3-6 for all columns.
1. Iterate steps 3-7 until the predicted ratings converge.
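A minimal NumPy sketch of the iterative approach, again with a toy matrix and least squares standing in for an arbitrary model:

```python
import numpy as np

R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 2.0, 4.0],
              [3.0, 4.0, 2.0]])
missing = np.isnan(R)

row_means = np.nanmean(R, axis=1, keepdims=True)
M = R - row_means                     # step 1: mean-center each row
M = np.where(missing, 0.0, M)         # step 2: missing values -> 0

for _ in range(20):                   # step 8: iterate until convergence
    M_prev = M.copy()
    for j in range(M.shape[1]):       # steps 3-7: one model per column
        y = M[:, j]
        X = np.delete(M, j, axis=1)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        preds = X @ w
        M[missing[:, j], j] = preds[missing[:, j]]  # update missing cells only
    if np.max(np.abs(M - M_prev)) < 1e-6:
        break

R_completed = M + row_means           # undo the mean-centering
```

Note that only the originally missing cells are ever overwritten, so the observed ratings are preserved exactly.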
### Approach 3: ML and Content-Based Collaborative Filtering
1. Generate user features and item features
1. Concatenate the user features and item features for every user-item pair wherein a user has rated an item.
1. Perform a stratified splitting of the data into train and test sets where the test set is a fraction of the items a user has not rated. Each user must have a minimum number of items rated to be part of the training process.
1. Train a model using the concatenated user-item feature table to predict the rating for each user-item pair in the training set.
1. Use the trained model to predict the rating for all items a user has not rated.
1. Select the items with the highest rating as the recommendations.
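Once the features are joined, Approach 3 reduces to an ordinary supervised-learning problem. A toy sketch (the tables, column names, and least-squares model are all illustrative assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical ratings and feature tables
ratings = pd.DataFrame({'user_id': [0, 0, 1],
                        'item_id': [10, 11, 10],
                        'rating':  [4.0, 2.0, 5.0]})
user_feats = pd.DataFrame({'user_id': [0, 1], 'age': [25, 40]})
item_feats = pd.DataFrame({'item_id': [10, 11], 'price': [9.99, 4.99]})

# Step 2: concatenate user and item features for every rated user-item pair
train = ratings.merge(user_feats, on='user_id').merge(item_feats, on='item_id')

# Step 4: train any regressor on the concatenated features; ordinary
# least squares with a bias column stands in here
X = np.c_[np.ones(len(train)), train[['age', 'price']].to_numpy()]
y = train['rating'].to_numpy()
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 5: predict the rating for an unrated pair, e.g. user 1 and item 11
x_new = np.array([1.0, 40.0, 4.99])
pred = x_new @ w
```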
## Training on Large Datasets
The diagram below shows the flowchart of the proposed method for model training. If the dataset is small (left branch), we use the two approaches mentioned above and apply them to the raw utility matrix. Since we train one model per item, a limitation of these methods is that they are computationally expensive, especially when the iterative approach is used, so we need a more scalable solution. One way to achieve this is to assign users and/or items to clusters and derive a new utility matrix that contains the representative ratings per cluster.
After clustering, the cluster-based utility matrix contains the aggregate ratings of each cluster. The collaborative filtering problem is now reduced to prediction of ratings per user- or item-cluster instead of predicting the ratings for all users and items.

### Model training for clustered data
1. Generate multiple sets of synthetic data containing unknown/missing ratings. We do this by randomly setting elements of the cluster-based utility matrix to NaN.
1. For each matrix of synthetic data, we apply the iterative and the SVD approach to predict the missing ratings.
1. Collect the predictions from each matrix of synthetic data and take the mean across all datasets. This will be treated as the final cluster-based predictions of the RS.
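The cluster-level procedure might be sketched like this, with a simple column-mean filler standing in for the iterative/SVD predictors used in the full pipeline (the matrix and knockout rate are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster-based utility matrix (user-clusters x item-clusters)
C = rng.uniform(1, 5, size=(5, 4))

def predict_missing(M):
    """Stand-in predictor: fill NaNs with the column mean
    (falling back to the global mean for an all-NaN column)."""
    col_means = np.nanmean(M, axis=0)
    col_means = np.where(np.isnan(col_means), np.nanmean(M), col_means)
    return np.where(np.isnan(M), col_means, M)

n_synthetic = 10
preds = []
for _ in range(n_synthetic):
    mask = rng.random(C.shape) < 0.2          # step 1: knock out ~20% of entries
    synthetic = np.where(mask, np.nan, C)
    preds.append(predict_missing(synthetic))  # step 2: predict the missing ratings

final = np.mean(preds, axis=0)                # step 3: mean across all synthetic datasets
```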
## Interpreting your pruning and regularization experiments
This notebook contains code to be included in your own notebooks by adding this line at the top of your notebook:<br>
```%run distiller_jupyter_helpers.ipynb```
```
# Relative import of code from distiller, w/o installing the package
import os
import sys
import distiller.utils
import distiller
import distiller.apputils.checkpoint
import torch
import torchvision
import collections
import matplotlib.pyplot as plt
import numpy as np
def to_np(x):
return x.cpu().numpy()
def flatten(weights):
weights = weights.clone().view(weights.numel())
weights = to_np(weights)
return weights
import scipy.stats as stats
def plot_params_hist_single(name, weights_pytorch, remove_zeros=False, kmeans=None):
weights = flatten(weights_pytorch)
if remove_zeros:
weights = weights[weights!=0]
n, bins, patches = plt.hist(weights, bins=200)
plt.title(name)
if kmeans is not None:
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
cnt_coefficients = [len(labels[labels==i]) for i in range(kmeans.n_clusters)]
# Normalize the coefficients so they display in the same range as the float32 histogram
cnt_coefficients = [cnt / 5 for cnt in cnt_coefficients]
centroids, cnt_coefficients = zip(*sorted(zip(centroids, cnt_coefficients)))
cnt_coefficients = list(cnt_coefficients)
centroids = list(centroids)
if remove_zeros:
for i in range(len(centroids)):
if abs(centroids[i]) < 0.0001: # almost zero
centroids.remove(centroids[i])
cnt_coefficients.remove(cnt_coefficients[i])
break
plt.plot(centroids, cnt_coefficients)
zeros = [0] * len(centroids)
plt.plot(centroids, zeros, 'r+', markersize=15)
h = cnt_coefficients
hmean = np.mean(h)
hstd = np.std(h)
pdf = stats.norm.pdf(h, hmean, hstd)
#plt.plot(h, pdf)
plt.show()
print("mean: %f\nstddev: %f" % (weights.mean(), weights.std()))
print("size=%s = %d elements" % (distiller.size2str(weights_pytorch.size()), distiller.volume(weights_pytorch)))
print("min: %.3f\nmax:%.3f" % (weights.min(), weights.max()))
def plot_params_hist(params, which='weight', remove_zeros=False):
for name, weights_pytorch in params.items():
if which not in name:
continue
plot_params_hist_single(name, weights_pytorch, remove_zeros)
def plot_params2d(classifier_weights, figsize, binary_mask=True,
gmin=None, gmax=None,
xlabel="", ylabel="", title=""):
if not isinstance(classifier_weights, list):
classifier_weights = [classifier_weights]
for weights in classifier_weights:
assert weights.dim() in [2,4], "expected a 2D or 4D weights tensor"
shape_str = distiller.size2str(weights.size())
volume = distiller.volume(weights)
# Clone because we are going to change the tensor values
if binary_mask:
weights2d = weights.clone()
else:
weights2d = weights
if weights.dim() == 4:
weights2d = weights2d.view(weights.size()[0] * weights.size()[1], -1)
sparsity = len(weights2d[weights2d==0]) / volume
# Move to CPU so we can plot it.
if weights2d.is_cuda:
weights2d = weights2d.cpu()
cmap='seismic'
# create a binary image (non-zero elements are black; zeros are white)
if binary_mask:
cmap='binary'
weights2d[weights2d!=0] = 1
fig = plt.figure(figsize=figsize)
if (not binary_mask) and (gmin is not None) and (gmax is not None):
if isinstance(gmin, torch.Tensor):
gmin = gmin.item()
gmax = gmax.item()
plt.imshow(weights2d, cmap=cmap, vmin=gmin, vmax=gmax)
else:
plt.imshow(weights2d, cmap=cmap, vmin=0, vmax=1)
#plt.figure(figsize=(20,40))
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.title(title)
plt.colorbar( pad=0.01, fraction=0.01)
plt.show()
print("sparsity = %.1f%% (nnz=black)" % (sparsity*100))
print("size=%s = %d elements" % (shape_str, volume))
def printk(k):
"""Print the values of the elements of a kernel as a list"""
print(list(k.view(k.numel())))
def plot_param_kernels(weights, layout, size_ctrl, binary_mask=False, color_normalization='Model',
gmin=None, gmax=None, interpolation=None, first_kernel=0):
ofms, ifms = weights.size()[0], weights.size()[1]
kw, kh = weights.size()[2], weights.size()[3]
print("min=%.4f\tmax=%.4f" % (weights.min(), weights.max()))
shape_str = distiller.size2str(weights.size())
volume = distiller.volume(weights)
print("size=%s = %d elements" % (shape_str, volume))
# Clone because we are going to change the tensor values
weights = weights.clone()
if binary_mask:
weights[weights!=0] = 1
# Take the inverse of the pixels, because we want zeros to appear white
#weights = 1 - weights
kernels = weights.view(ofms * ifms, kh, kw)
nrow, ncol = layout[0], layout[1]
# Move to CPU so we can plot it.
if kernels.is_cuda:
kernels = kernels.cpu()
# Plot the graph
plt.gray()
#plt.tight_layout()
fig = plt.figure( figsize=(layout[0]*size_ctrl, layout[1]*size_ctrl) );
# We want to normalize the grayscale brightness levels for all of the images we display (group),
# otherwise, each image is normalized separately and this causes distortion between the different
# filter images we display.
# We don't normalize across all of the filters images, because the outliers cause the image of each
# filter to be very muted. This is because each group of filters we display usually has low variance
# between the element values of that group.
if color_normalization=='Tensor':
gmin = weights.min()
gmax = weights.max()
elif color_normalization=='Group':
gmin = weights[0:nrow, 0:ncol].min()
gmax = weights[0:nrow, 0:ncol].max()
print("gmin=%.4f\tgmax=%.4f" % (gmin, gmax))
if isinstance(gmin, torch.Tensor):
gmin = gmin.item()
gmax = gmax.item()
i = 0
for row in range(0, nrow):
for col in range (0, ncol):
ax = fig.add_subplot(layout[0], layout[1], i+1)
if binary_mask:
ax.matshow(kernels[first_kernel+i], cmap='binary', vmin=0, vmax=1);
else:
# Use seismic so that colors around the center are lighter. Red and blue are used
# to represent (and visually separate) negative and positive weights
ax.matshow(kernels[first_kernel+i], cmap='seismic', vmin=gmin, vmax=gmax, interpolation=interpolation);
ax.set(xticks=[], yticks=[])
i += 1
def l1_norm_histogram(weights):
"""Compute a histogram of the L1-norms of the kernels of a weights tensor.
The L1-norm of a kernel is one way to quantify the "magnitude" of the total coefficients
making up this kernel.
Another interesting look at filters is to compute a histogram per filter.
"""
ofms, ifms = weights.size()[0], weights.size()[1]
kw, kh = weights.size()[2], weights.size()[3]
kernels = weights.view(ofms * ifms, kh, kw)
if kernels.is_cuda:
kernels = kernels.cpu()
l1_hist = []
for kernel in range(ofms*ifms):
l1_hist.append(kernels[kernel].norm(1))
return l1_hist
def plot_l1_norm_hist(weights):
l1_hist = l1_norm_histogram(weights)
n, bins, patches = plt.hist(l1_hist, bins=200)
plt.title('Kernel L1-norm histograms')
plt.ylabel('Frequency')
plt.xlabel('Kernel L1-norm')
plt.show()
def plot_layer_sizes(which, sparse_model, dense_model):
dense = []
sparse = []
names = []
for name, sparse_weights in sparse_model.state_dict().items():
if ('weight' not in name) or (which!='*' and which not in name):
continue
sparse.append(len(sparse_weights[sparse_weights!=0]))
names.append(name)
for name, dense_weights in dense_model.state_dict().items():
if ('weight' not in name) or (which!='*' and which not in name):
continue
dense.append(dense_weights.numel())
N = len(sparse)
ind = np.arange(N) # the x locations for the groups
fig, ax = plt.subplots()
width = .47
p1 = plt.bar(ind, dense, width = .47, color = '#278DBC')
p2 = plt.bar(ind, sparse, width = 0.35, color = '#000099')
plt.ylabel('Size')
plt.title('Layer sizes')
plt.xticks(rotation='vertical')
plt.xticks(ind, names)
#plt.yticks(np.arange(0, 100, 150))
plt.legend((p1[0], p2[0]), ('Dense', 'Sparse'))
#Remove plot borders
for location in ['right', 'left', 'top', 'bottom']:
ax.spines[location].set_visible(False)
#Fix grid to be horizontal lines only and behind the plots
ax.yaxis.grid(color='gray', linestyle='solid')
ax.set_axisbelow(True)
plt.show()
def conv_param_names(model):
return [param_name for param_name, p in model.state_dict().items()
if (p.dim()>2) and ("weight" in param_name)]
def conv_fc_param_names(model):
return [param_name for param_name, p in model.state_dict().items()
if (p.dim()>1) and ("weight" in param_name)]
def conv_fc_params(model):
return [(param_name, p) for (param_name, p) in model.state_dict().items()
if (p.dim()>1) and ("weight" in param_name)]
def fc_param_names(model):
return [param_name for param_name, p in model.state_dict().items()
if (p.dim()==2) and ("weight" in param_name)]
def plot_bars(which, setA, setAName, setB, setBName, names, title):
N = len(setA)
ind = np.arange(N) # the x locations for the groups
fig, ax = plt.subplots(figsize=(20,10))
width = .47
p1 = plt.bar(ind, setA, width = .47, color = '#278DBC')
p2 = plt.bar(ind, setB, width = 0.35, color = '#000099')
plt.ylabel('Size')
plt.title(title)
plt.xticks(rotation='vertical')
plt.xticks(ind, names)
#plt.yticks(np.arange(0, 100, 150))
plt.legend((p1[0], p2[0]), (setAName, setBName))
#Remove plot borders
for location in ['right', 'left', 'top', 'bottom']:
ax.spines[location].set_visible(False)
#Fix grid to be horizontal lines only and behind the plots
ax.yaxis.grid(color='gray', linestyle='solid')
ax.set_axisbelow(True)
plt.show()
```
# Confounding Example: Finding causal effects from observed data
Suppose you are given some data with treatment and outcome. Can you determine whether the treatment causes the outcome, or is the correlation purely due to another common cause?
```
import os, sys
sys.path.append(os.path.abspath("../../"))
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import math
import dowhy
from dowhy import CausalModel
import dowhy.plotter
import datasets
```
# Let's create a mystery dataset. Need to find if there is a causal effect.
Creating the dataset. It is generated from either of two models:
* **Model 1**: Treatment does cause outcome.
* **Model 2**: Treatment does not cause outcome. All observed correlation is due to a common cause.
```
rvar = 1 if np.random.uniform() >0.5 else 0
data_dict = datasets.xy_dataset(10000, effect=rvar, sd_error=0.2)
df = data_dict['df']
print(df[["Treatment", "Outcome", "w0"]].head())
dowhy.plotter.plot_treatment_outcome(df[data_dict["treatment_name"]], df[data_dict["outcome_name"]],
df[data_dict["time_val"]])
```
# Using DoWhy to resolve the mystery: *Does Treatment cause Outcome variable?*
## STEP 1: Model the problem as a causal graph
Initializing the causal model.
```
model= CausalModel(
data=df,
treatment=data_dict["treatment_name"],
outcome=data_dict["outcome_name"],
common_causes=data_dict["common_causes_names"],
instruments=data_dict["instrument_names"])
model.view_model(layout="dot")
```
<img src="causal_model.png">
## STEP 2: Identify causal effect using properties of the formal causal graph
Identify the causal effect using properties of the causal graph.
```
identified_estimand = model.identify_effect()
print(identified_estimand)
```
## STEP 3: Estimate the causal effect
Once we have the identified estimand, we can use any statistical method to estimate the causal effect.
Let's use Linear Regression for simplicity.
```
estimate = model.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print("Causal Estimate is " + str(estimate.value))
# Plot slope of line between treatment and outcome (= causal effect)
dowhy.plotter.plot_causal_effect(estimate, df[data_dict["treatment_name"]], df[data_dict["outcome_name"]])
```
### Checking if the estimate is correct
```
print("DoWhy estimate is " + str(estimate.value))
print("Actual true causal effect was {0}".format(rvar))
```
## Step 4: Refuting the estimate
We can also refute the estimate to check its robustness to assumptions (*aka* sensitivity analysis, but on steroids).
### Adding a random common cause variable
```
res_random=model.refute_estimate(identified_estimand, estimate, method_name="random_common_cause")
print(res_random)
```
### Replacing treatment with a random (placebo) variable
```
res_placebo=model.refute_estimate(identified_estimand, estimate,
method_name="placebo_treatment_refuter", placebo_type="permute")
print(res_placebo)
```
### Removing a random subset of the data
```
res_subset=model.refute_estimate(identified_estimand, estimate,
method_name="data_subset_refuter", subset_fraction=0.9)
print(res_subset)
```
As you can see, our causal estimator is robust to simple refutations.
# Identify calibration targets
Attempt to automatically locate the calibration target in each scene. If this fails for any images then the target can be located manually. Note that the automated method only works for scenes containing a single target!
Calibration spectra ( measured radiance vs known reflectance ) are then extracted from the targets and stored in the image .hdr files.
```
import os
import glob
import numpy as np
from tqdm.auto import tqdm
import hylite
import hylite.io as io
from hylite.correct import Panel
```
## Define data directories
```
# input directory containing images to locate (these should all be captured from about the same location)
path = '/Users/thiele67/Documents/Data/SPAIN/2020_Sierra_Bullones/20200309_sun/elc'
image_paths = glob.glob( os.path.join(path,"*.hdr"), recursive=True )
print("Found %d images:" % len(image_paths))
for p in image_paths:
print(p)
```
## Define calibration panel material
```
from hylite.reference.spectra import R90, R50, PVC_Red, PVC_White, PVC_Grey # load calibration material spectra
M = R90 # define calibration panel material
```
## Attempt to automatically identify targets
```
for i,p in enumerate(tqdm(image_paths)):
image = io.loadWithGDAL( p ) #load image
image.set_as_nan(0) # set nans
target = Panel(M,image,method='auto', bands=hylite.RGB) # look for panel
#plot target
fig,ax = target.quick_plot()
fig.suptitle("%d: %s" % (i,p))
fig.show()
#add to header
image.header.add_panel(target)
#save
outpath = io.matchHeader(p)[0]
io.saveHeader(outpath, image.header)
```
## If necessary, manually pick some targets
```
assert False, "Pause here and turn your brain on! ツ"
incorrect = [0,1,2,3] # choose incorrectly identified targets to manually select
```
First, clear incorrectly set targets from header file.
```
for i in incorrect:
image = io.loadWithGDAL( image_paths[i] )
image.header.remove_panel(None) # remove panels
outpath = io.matchHeader(image_paths[i])[0]
io.saveHeader(outpath, image.header)
```
If targets do exist in scene, manually select them. Skip this step if no targets exist.
```
targets = []
for i in incorrect:
image = io.loadWithGDAL( image_paths[i] ) #load image
target = Panel(M,image,method='manual',bands=hylite.RGB) # select panel
#add to header
image.header.add_panel(target)
#save
outpath = io.matchHeader(image_paths[i])[0]
io.saveHeader(outpath, image.header)
targets.append(target) # store for plotting
#plot targets
%matplotlib inline
for i,t in enumerate(targets):
#plot target
fig,ax = t.quick_plot()
fig.suptitle("%d: %s" % (incorrect[i],image_paths[incorrect[i]]))
fig.show()
```
```
%matplotlib inline
```
Neural Transfer Using PyTorch
=============================
**Author**: `Alexis Jacq <https://alexis-jacq.github.io>`_
**Edited by**: `Winston Herring <https://github.com/winston6>`_
**Re-implemented by:** `Shubhajit Das <https://github.com/Shubhajitml>`_
Introduction
------------
This tutorial explains how to implement the `Neural-Style algorithm <https://arxiv.org/abs/1508.06576>`__
developed by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge.
Neural-Style, or Neural-Transfer, allows you to take an image and
reproduce it with a new artistic style. The algorithm takes three images,
an input image, a content-image, and a style-image, and changes the input
to resemble the content of the content-image and the artistic style of the style-image.
.. figure:: /_static/img/neural-style/neuralstyle.png
:alt: content1
Underlying Principle
--------------------
The principle is simple: we define two distances, one for the content
($D_C$) and one for the style ($D_S$). $D_C$ measures how different the content
is between two images while $D_S$ measures how different the style is
between two images. Then, we take a third image, the input, and
transform it to minimize both its content-distance with the
content-image and its style-distance with the style-image. Now we can
import the necessary packages and begin the neural transfer.
Importing Packages and Selecting a Device
-----------------------------------------
Below is a list of the packages needed to implement the neural transfer.
- ``torch``, ``torch.nn``, ``numpy`` (indispensable packages for
neural networks with PyTorch)
- ``torch.optim`` (efficient gradient descents)
- ``PIL``, ``PIL.Image``, ``matplotlib.pyplot`` (load and display
images)
- ``torchvision.transforms`` (transform PIL images into tensors)
- ``torchvision.models`` (train or load pre-trained models)
- ``copy`` (to deep copy the models; system package)
```
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
!ls
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import copy
```
Next, we need to choose which device to run the network on and import the
content and style images. Running the neural transfer algorithm on large
images takes longer and will go much faster when running on a GPU. We can
use ``torch.cuda.is_available()`` to detect if there is a GPU available.
Next, we set the ``torch.device`` for use throughout the tutorial. Also the ``.to(device)``
method is used to move tensors or modules to a desired device.
```
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```
Loading the Images
------------------
Now we will import the style and content images. The original PIL images have values between 0 and 255, but when
transformed into torch tensors, their values are converted to be between
0 and 1. The images also need to be resized to have the same dimensions.
An important detail to note is that neural networks from the
torch library are trained with tensor values ranging from 0 to 1. If you
try to feed the networks with 0 to 255 tensor images, then the activated
feature maps will be unable to sense the intended content and style.
However, pre-trained networks from the Caffe library are trained with 0
to 255 tensor images.
.. Note::
Here are links to download the images required to run the tutorial:
`picasso.jpg <https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg>`__ and
`dancing.jpg <https://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg>`__.
Download these two images and add them to a directory
with name ``images`` in your current working directory.
```
# desired size of the output image
imsize = 512 if torch.cuda.is_available() else 128 # use small size if no gpu
loader = transforms.Compose([
transforms.Resize(imsize), # scale imported image
transforms.ToTensor()]) # transform it into a torch tensor
def image_loader(image_name):
image = Image.open(image_name)
# fake batch dimension required to fit network's input dimensions
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
style_img = image_loader("colorful.jpg")
content_img = image_loader("shubha.jpg")
assert style_img.size() == content_img.size(), \
"we need to import style and content images of the same size"
```
Now, let's create a function that displays an image by reconverting a
copy of it to PIL format and displaying the copy using
``plt.imshow``. We will try displaying the content and style images
to ensure they were imported correctly.
```
unloader = transforms.ToPILImage() # reconvert into PIL image
plt.ion()
def imshow(tensor, title=None):
image = tensor.cpu().clone() # we clone the tensor to not do changes on it
image = image.squeeze(0) # remove the fake batch dimension
image = unloader(image)
plt.imshow(image)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
plt.figure()
imshow(style_img, title='Style Image')
plt.figure()
imshow(content_img, title='Content Image')
```
Loss Functions
--------------
Content Loss
~~~~~~~~~~~~
The content loss is a function that represents a weighted version of the
content distance for an individual layer. The function takes the feature
maps $F_{XL}$ of a layer $L$ in a network processing input $X$ and returns the
weighted content distance $w_{CL}.D_C^L(X,C)$ between the image $X$ and the
content image $C$. The feature maps of the content image($F_{CL}$) must be
known by the function in order to calculate the content distance. We
implement this function as a torch module with a constructor that takes
$F_{CL}$ as an input. The distance $\|F_{XL} - F_{CL}\|^2$ is the mean square error
between the two sets of feature maps, and can be computed using ``nn.MSELoss``.
We will add this content loss module directly after the convolution
layer(s) that are being used to compute the content distance. This way
each time the network is fed an input image the content losses will be
computed at the desired layers and because of auto grad, all the
gradients will be computed. Now, in order to make the content loss layer
transparent we must define a ``forward`` method that computes the content
loss and then returns the layer’s input. The computed loss is saved as a
parameter of the module.
```
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used
# to dynamically compute the gradient: this is a stated value,
# not a variable. Otherwise the forward method of the criterion
# will throw an error.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
```
.. Note::
**Important detail**: although this module is named ``ContentLoss``, it
is not a true PyTorch Loss function. If you want to define your content
loss as a PyTorch Loss function, you have to create a PyTorch autograd function
to recompute/implement the gradient manually in the ``backward``
method.
Style Loss
~~~~~~~~~~
The style loss module is implemented similarly to the content loss
module. It will act as a transparent layer in a
network that computes the style loss of that layer. In order to
calculate the style loss, we need to compute the gram matrix $G_{XL}$. A gram
matrix is the result of multiplying a given matrix by its transposed
matrix. In this application the given matrix is a reshaped version of
the feature maps $F_{XL}$ of a layer $L$. $F_{XL}$ is reshaped to form $\hat{F}_{XL}$, a $K$\ x\ $N$
matrix, where $K$ is the number of feature maps at layer $L$ and $N$ is the
length of any vectorized feature map $F_{XL}^k$. For example, the first line
of $\hat{F}_{XL}$ corresponds to the first vectorized feature map $F_{XL}^1$.
Finally, the gram matrix must be normalized by dividing each element by
the total number of elements in the matrix. This normalization is to
counteract the fact that $\hat{F}_{XL}$ matrices with a large $N$ dimension yield
larger values in the Gram matrix. These larger values will cause the
first layers (before pooling layers) to have a larger impact during the
gradient descent. Style features tend to be in the deeper layers of the
network so this normalization step is crucial.
```
def gram_matrix(input):
a, b, c, d = input.size() # a=batch size(=1)
# b=number of feature maps
# (c,d)=dimensions of a f. map (N=c*d)
features = input.view(a * b, c * d)  # resize F_XL into \hat F_XL
G = torch.mm(features, features.t()) # compute the gram product
# we 'normalize' the values of the gram matrix
# by dividing by the number of elements in each feature map.
return G.div(a * b * c * d)
```
Now the style loss module looks almost exactly like the content loss
module. The style distance is also computed using the mean square
error between $G_{XL}$ and $G_{SL}$.
```
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
```
Importing the Model
-------------------
Now we need to import a pre-trained neural network. We will use a 19
layer VGG network like the one used in the paper.
PyTorch’s implementation of VGG is a module divided into two child
``Sequential`` modules: ``features`` (containing convolution and pooling layers),
and ``classifier`` (containing fully connected layers). We will use the
``features`` module because we need the output of the individual
convolution layers to measure content and style loss. Some layers have
different behavior during training than evaluation, so we must set the
network to evaluation mode using ``.eval()``.
```
cnn = models.vgg19(pretrained=True).features.to(device).eval()
```
Additionally, VGG networks are trained on images with each channel
normalized by mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].
We will use them to normalize the image before sending it into the network.
```
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# create a module to normalize input image so we can easily put it in a
# nn.Sequential
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std to make them [C x 1 x 1] so that they can
# directly work with image Tensor of shape [B x C x H x W].
# B is batch size. C is number of channels. H is height and W is width.
self.mean = torch.tensor(mean).view(-1, 1, 1)
self.std = torch.tensor(std).view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
```
A ``Sequential`` module contains an ordered list of child modules. For
instance, ``vgg19.features`` contains a sequence (Conv2d, ReLU, MaxPool2d,
Conv2d, ReLU…) aligned in the right order of depth. We need to add our
content loss and style loss layers immediately after the convolution
layer they are detecting. To do this we must create a new ``Sequential``
module that has content loss and style loss modules correctly inserted.
```
# desired depth layers to compute style/content losses :
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers=content_layers_default,
style_layers=style_layers_default):
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have an iterable access to the list of content/style
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, so we make a new nn.Sequential
# to put in modules that are supposed to be activated sequentially
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# The in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below. So we replace with out-of-place
# ones here.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
return model, style_losses, content_losses
```
Next, we select the input image. You can use a copy of the content image
or white noise.
```
input_img = content_img.clone()
# if you want to use white noise instead uncomment the below line:
# input_img = torch.randn(content_img.data.size(), device=device)
# add the original input image to the figure:
plt.figure()
imshow(input_img, title='Input Image')
```
Gradient Descent
----------------
As Leon Gatys, the author of the algorithm, suggested `here <https://discuss.pytorch.org/t/pytorch-tutorial-for-neural-transfert-of-artistic-style/336/20?u=alexis-jacq>`__, we will use
L-BFGS algorithm to run our gradient descent. Unlike training a network,
we want to train the input image in order to minimise the content/style
losses. We will create a PyTorch L-BFGS optimizer ``optim.LBFGS`` and pass
our image to it as the tensor to optimize.
```
def get_input_optimizer(input_img):
    # this line shows that input is a parameter that requires a gradient
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
```
Finally, we must define a function that performs the neural transfer. For
each iteration of the networks, it is fed an updated input and computes
new losses. We will run the ``backward`` methods of each loss module to
dynamically compute their gradients. The optimizer requires a “closure”
function, which reevaluates the module and returns the loss.
We still have one final constraint to address. The network may try to
optimize the input with values that exceed the 0 to 1 tensor range for
the image. We can address this by correcting the input values to be
between 0 to 1 each time the network is run.
```
def run_style_transfer(cnn, normalization_mean, normalization_std,
content_img, style_img, input_img, num_steps=500,
style_weight=1000000, content_weight=1):
"""Run the style transfer."""
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn,
normalization_mean, normalization_std, style_img, content_img)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values of updated input image
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img)
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
loss = style_score + content_score
loss.backward()
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
optimizer.step(closure)
# a last correction...
input_img.data.clamp_(0, 1)
return input_img
```
Finally, we can run the algorithm.
```
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
content_img, style_img, input_img)
plt.figure()
imshow(output, title='Output Image')
# sphinx_gallery_thumbnail_number = 4
plt.ioff()
plt.show()
```
|
github_jupyter
|
## Linear Regression
### PyTorch Model Designing Steps
1. **Design your model using class with Variables**
2. **Construct loss and optimizer (select from PyTorch API)**
3. **Training cycle (forward, backward, update)**
### Step #1 : Design your model using class with Variables
```
from torch import nn
import torch
from torch import tensor
import matplotlib.pyplot as plt
x_data = tensor([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y_data = tensor([[2.0], [4.0], [6.0], [8.0], [10.0], [12.0]])
# Hyper-parameters
input_size = 1
output_size = 1
num_epochs = 100
learning_rate = 0.01
print(torch.__version__)
if torch.cuda.is_available():
    print(torch.cuda.get_device_name())
```
### Using GPU for the PyTorch Models
Remember always 2 things must be on GPU
- model
- tensors
```
class Model(nn.Module):
def __init__(self):
"""
In the constructor we instantiate nn.Linear module
"""
super().__init__()
self.linear = torch.nn.Linear(input_size, output_size) # One in and one out
def forward(self, x):
"""
In the forward function we accept a Variable of input data and we must return
a Variable of output data. We can use Modules defined in the constructor as
well as arbitrary operators on Variables.
"""
y_pred = self.linear(x)
return y_pred
# our model
model = Model()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
```
### Explanations:-
`torch.nn.Linear(in_features, out_features, bias=True)`
Applies a linear transformation to the incoming data: $y = W^T * x + b$
**Parameters:**
- `in_features `– size of each input sample (i.e. size of x)
- `out_features` – size of each output sample (i.e. size of y)
- `bias` – If set to False, the layer will not learn an additive bias. **Default: True**
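A quick sketch of what this layer stores and computes (assuming PyTorch is installed; the names `layer`, `x`, and `manual` are just for illustration):

```python
import torch
from torch import nn

layer = nn.Linear(3, 2)                 # in_features=3, out_features=2
# The layer stores a (out_features, in_features) weight and a (out_features,) bias
assert tuple(layer.weight.shape) == (2, 3)
assert tuple(layer.bias.shape) == (2,)

x = torch.randn(4, 3)                   # a batch of 4 samples with 3 features each
y = layer(x)                            # shape (4, 2)
# Internally this is the linear transformation y = x @ W^T + b
manual = x @ layer.weight.T + layer.bias
assert torch.allclose(y, manual)
```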
### Step #2 : Construct loss and optimizer (select from PyTorch API)
```
# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
### Explanations:-
MSE Loss: Mean Squared Error (**Default: 'mean'**)
- $\hat y$ : prediction
- $y$ : true value
$MSE \ (sum) = \sum_{i=1}^n(\hat y_i - y_i)^2$
$MSE \ (mean) = \frac{1}{n} \sum_{i=1}^n(\hat y_i - y_i)^2$
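A small numeric check of the two reductions, sketched with NumPy rather than `torch.nn.MSELoss` (the values here are made up for illustration):

```python
import numpy as np

y_pred = np.array([2.5, 4.0, 5.5])   # predictions
y_true = np.array([2.0, 4.0, 6.0])   # true values

sq_err = (y_pred - y_true) ** 2      # [0.25, 0.0, 0.25]
mse_sum = sq_err.sum()               # 0.5       <- reduction='sum'
mse_mean = sq_err.mean()             # 0.5 / 3   <- reduction='mean' (the default)
print(mse_sum, mse_mean)
```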
### Step #3 : Training: forward, loss, backward, step
```
# Credit: https://github.com/jcjohnson/pytorch-examples
# Training loop
for epoch in range(num_epochs):
# 1) Forward pass: Compute predicted y by passing x to the model
y_pred = model(x_data.to(device))
# 2) Compute and print loss
loss = criterion(y_pred, y_data.to(device))
print(f'Epoch: {epoch} | Loss: {loss.item()} ')
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
# Getting gradients w.r.t. parameters
loss.backward()
# Updating parameters
optimizer.step()
# After training
hour_var = tensor([[7.0]]).to(device)
y_pred = model(hour_var)
print("Prediction (after training)", 7, y_pred.item())
```
### Explanations:-
- Calling `.backward()` multiple times accumulates the gradient (**by addition**) for each parameter.
- This is why you should call `optimizer.zero_grad()` after each .step() call.
- Note that following the first `.backward` call, a second call is only possible after you have performed another **forward pass**.
- `optimizer.step` performs a parameter update based on the current gradient (**stored in .grad attribute of a parameter**)
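A minimal demonstration of this accumulation behaviour on a single scalar parameter (assuming PyTorch):

```python
import torch

w = torch.tensor([3.0], requires_grad=True)

loss = (2 * w).sum()
loss.backward()
print(w.grad)        # tensor([2.]) -- d(2w)/dw = 2

# A second forward + backward pass ADDS to the stored gradient
loss = (2 * w).sum()
loss.backward()
print(w.grad)        # tensor([4.]) -- accumulated, not replaced

w.grad.zero_()       # this is what optimizer.zero_grad() does per parameter
print(w.grad)        # tensor([0.])
```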
### Simplified equation:-
- `parameters = parameters - learning_rate * parameters_gradients`
- parameters $W$ and $b$ in ($y = W^T * x + b$)
- $\theta = \theta - \eta \cdot \nabla_\theta$ [ General parameter $\theta$ ]
* $\theta$ : parameters (our variables)
* $\eta$ : learning rate (how fast we want to learn)
* $\nabla_\theta$ : parameters' gradients
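The same update rule can be written out by hand without an optimizer. This NumPy sketch fits the y = 2x data from above, so the learned parameters should approach $W \approx 2$ and $b \approx 0$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = 2.0 * x                          # same y = 2x relationship as x_data/y_data above

w, b = 0.0, 0.0                      # the parameters theta
eta = 0.01                           # the learning rate

for _ in range(5000):
    y_pred = w * x + b
    # gradients of mean squared error with respect to w and b
    grad_w = (2.0 / len(x)) * np.sum((y_pred - y) * x)
    grad_b = (2.0 / len(x)) * np.sum(y_pred - y)
    # parameters = parameters - learning_rate * parameters_gradients
    w -= eta * grad_w
    b -= eta * grad_b

print(round(w, 3), round(b, 3))      # approaches 2.0 and 0.0
```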
### Plot of predicted and actual values
```
# Clear figure
plt.clf()
# Get predictions
predictions = model(x_data.to(device)).cpu().detach().numpy()
# Plot true data
plt.plot(x_data, y_data, 'go', label='True data', alpha=0.5)
# Plot predictions
plt.plot(x_data, predictions, '--', label='Predictions', alpha=0.5)
# Legend and plot
plt.legend(loc='best')
plt.show()
```
### Saving Model to Directory
```
from google.colab import drive
drive.mount('/content/gdrive')
root_path = '/content/gdrive/My Drive/AUST Docs/AUST Teaching Docs/AUST Spring 2020/CSE 4238/Lab 02/'
```
### Save Model
```
save_model = True
if save_model is True:
# Saves only parameters
    # weights & biases
torch.save(model.state_dict(), root_path + 'linear_regression.pkl')
# Save the model checkpoint
# torch.save(model.state_dict(), root_path + 'linear_regression.ckpt')
```
### Load Model
```
load_model = True
if load_model is True:
model.load_state_dict(torch.load(root_path + 'linear_regression.pkl'))
```
### Try Other Optimizers
- torch.optim.Adagrad
- torch.optim.Adam
- torch.optim.Adamax
- torch.optim.ASGD
- torch.optim.LBFGS
- torch.optim.RMSprop
- torch.optim.Rprop
- torch.optim.SGD
### *** Official PyTorch Tutorials ***
https://pytorch.org/tutorials/
# **Blockchain Analytics**
Analysis of Bitcoin blockchain data (transaction graph) to find potential indicators of incidents
_by Dhruv Chopra and Siddhant Pathak_
## **Import all the necessary libraries**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import networkx as nx
import time
import requests
sb.set()
```
## **Query the data from the Blockchain API and organize it**
```
response = requests.get("https://blockchain.info/rawtx/0e3e2357e806b6cdb1f70b54c3a3a17b6714ee1f0e68bebb44a74b1efd512098")
jason = response.json()
#response = requests.get("https://blockchain.info/q/getreceivedbyaddress/12c6DSiU4Rq3P4ZxziKxzrL5LmMBrzjrJX")
jason
new = jason['out'][0]['addr']
new
response = requests.get("https://blockchain.info/rawaddr/"+new)
transactions = response.json()
transactions
rev = transactions['txs']
rev.reverse()
for i in rev:
print(i['block_height'])
```
## **Creating the graph**
```
stack1 = []
stack2 = []
g = nx.MultiDiGraph()
address='12c6DSiU4Rq3P4ZxziKxzrL5LmMBrzjrJX'
stack1
stack1.append(address)
color=[]
edge_color=[]
completed = []
level = 0
LEVEL = 10
while(level < LEVEL):
address = stack1.pop()
response = requests.get("https://blockchain.info/rawaddr/"+address)
address_info = response.json()
transactions = address_info['txs']
transactions.reverse()
for i in transactions:
if(i['hash'] not in completed):
completed.append(i['hash'])
#g.add_edge(address,i['hash'],weight = i['weight'])
for j in i["inputs"]:
if j['prev_out']['addr'] not in g:
color.append('blue')
if i['hash'] not in g:
color.append('red')
edge_color.append('purple')
g.add_edge(j['prev_out']["addr"],i['hash'], weight = j['prev_out']['value']*0.00000001)
stack2.append(j['prev_out']["addr"])
for j in i["out"]:
if i['hash'] not in g:
color.append('red')
if j['addr'] not in g:
color.append('blue')
edge_color.append('green')
g.add_edge(i['hash'],j["addr"], weight = j['value']*0.00000001)
stack2.append(j['addr'])
break
if(len(stack1)==0):
stack1 = stack2
level = level +1
time.sleep(8.5)
```
## **Store the graph**
```
nx.write_edgelist(g, path = "graph.csv", delimiter=":")
plt.figure(4, figsize=(50,50))
pos = nx.spring_layout(g, center=None, dim=2)
nx.draw_networkx(g,pos=pos, node_size=99, edge_color=edge_color, node_color=color, width=2.5, with_labels=False)
#labels = nx.get_edge_attributes(g,'weight')
#nx.draw_networkx_edge_labels(g,pos)
plt.savefig("graph.png")   # save before plt.show(), which clears the figure
plt.show()
```
# **Centrality**
## **VoteRank Algorithm**
VoteRank computes a ranking of the nodes in a graph G based on a voting scheme. With VoteRank, each node votes for its in-neighbours, and the node with the most votes is elected iteratively. The voting ability of out-neighbors of elected nodes is decreased in subsequent turns.
```
node_voterank = nx.algorithms.centrality.voterank(g)
node_voterank
```
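As a sanity check, VoteRank can also be run on a toy graph with an obvious answer (this example is not part of the original analysis): in a star graph the hub collects all the first-round votes and is elected first.

```python
import networkx as nx

star = nx.star_graph(5)       # node 0 is the hub, nodes 1..5 are leaves
ranking = nx.voterank(star)
print(ranking[0])             # 0 -- the hub is elected first
```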
## **Percolation Centrality** (error)
Percolation centrality of a node v, at a given time, is defined as the proportion of ‘percolated paths’ that go through that node. This measure quantifies relative impact of nodes based on their topological connectivity, as well as their percolation states. Percolation states of nodes are used to depict network percolation scenarios (such as during infection transmission in a social network of individuals, spreading of computer viruses on computer networks, or transmission of disease over a network of towns) over time. In this measure usually the percolation state is expressed as a decimal between 0.0 and 1.0.
```
node_percolation = nx.algorithms.centrality.percolation_centrality(g)
node_percolation
```
## **PageRank Algorithm** (error)
PageRank is an algorithm used by Google Search to rank web pages in their search engine results. PageRank is a way of measuring the importance of website pages.
```
node_pagerank = nx.algorithms.link_analysis.pagerank_alg.pagerank(g)
node_pagerank
```
## **Degree of Nodes**
`degree_centrality(G)` : Compute the degree centrality for nodes.
`in_degree_centrality(G)` : Compute the in-degree centrality for nodes.
`out_degree_centrality(G)` : Compute the out-degree centrality for nodes.
```
node_degree = nx.degree_centrality(g)
node_deg_in = nx.in_degree_centrality(g)
node_deg_out = nx.out_degree_centrality(g)
node_degree
node_deg_in
node_deg_out
```
## **Eigenvector and Katz Centrality** (error)
Katz centrality is a generalization of degree centrality. Degree centrality measures the number of direct neighbors, and Katz centrality measures the number of all nodes that can be connected through a path, while the contributions of distant nodes are penalized.
```
node_eigen = nx.eigenvector_centrality(g)
node_eigen
node_katz = nx.katz_centrality(g)
node_katz
```
## **Closeness**
`closeness_centrality(G[, u, distance, …])` : Compute closeness centrality for nodes.
`incremental_closeness_centrality(G, edge[, …])`: Incremental closeness centrality for nodes.
Notice that higher values of closeness indicate higher centrality.
```
node_close = nx.closeness_centrality(g)
node_close
node_close_inc = nx.incremental_closeness_centrality(g, edge = ('1Ep8AVZx89qmBzzeu1zPpKLF8pxHfkZaJc', '56484b549f42a4485fb79b2838c7829805d025a28a46248eec677aaba78e4b70') )
```
## **Current Flow Closeness** (error)
Compute current-flow closeness centrality for nodes.
```
node_cur = nx.current_flow_closeness_centrality(g)
node_cur_info = nx.information_centrality(g)
```
## **Dispersion**
Calculate dispersion between u and v in G.
A link between two actors (u and v) has a high dispersion when their mutual ties (s and t) are not well connected with each other.
If u (v) is specified, returns a dictionary of nodes with dispersion score for all “target” (“source”) nodes. If neither u nor v is specified, returns a dictionary of dictionaries for all nodes ‘u’ in the graph with a dispersion score for each node ‘v’.
```
dispersion = nx.dispersion(g)
dispersion
```
## **Harmonic Centrality**
Harmonic centrality of a node u is the sum of the reciprocal of the shortest path distances from all other nodes to u.
```
harmonic = nx.harmonic_centrality(g)
harmonic
```
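On a three-node path graph the definition can be verified by hand (a toy example, separate from the transaction graph): the middle node is one hop from both ends, giving 1/1 + 1/1 = 2.0, while an endpoint scores 1/1 + 1/2 = 1.5.

```python
import networkx as nx

path = nx.path_graph(3)            # 0 -- 1 -- 2
h = nx.harmonic_centrality(path)
print(h)                           # {0: 1.5, 1: 2.0, 2: 1.5}
```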
## **Reaching**
`local_reaching_centrality(G, v[, paths, …])` : Returns the local reaching centrality of a node in a directed graph. The local reaching centrality of a node in a directed graph is the proportion of other nodes reachable from that node
`global_reaching_centrality(G[, weight, …]`) : Returns the global reaching centrality of a directed graph. The global reaching centrality of a weighted directed graph is the average over all nodes of the difference between the local reaching centrality of the node and the greatest local reaching centrality of any node in the graph.
```
global_reach = nx.global_reaching_centrality(g)
global_reach
```
## **Second Order Centrality** (error)
The second order centrality of a given node is the standard deviation of the return times to that node of a perpetual random walk on G:
```
node_2nd_order = nx.second_order_centrality(g)
node_2nd_order
```
## **Trophic** (2 error)
`trophic_levels(G[, weight])` : Compute the trophic levels of nodes.
`trophic_differences(G[, weight])` : Compute the trophic differences of the edges of a directed graph.
`trophic_incoherence_parameter(G[, weight, …])` : Compute the trophic incoherence parameter of a graph.
```
trophic = nx.trophic_levels(g)
trophic
trophic_differences = nx.trophic_differences(g)
incoherence = nx.trophic_incoherence_parameter(g)
```
## **Subgraph** (error)
`subgraph_centrality(G)` : Returns subgraph centrality for each node in G.
`subgraph_centrality_exp(G)` : Returns the subgraph centrality for each node of G.
`estrada_index(G)` : Returns the Estrada index of the graph G.
```
subgraph = nx.subgraph_centrality(g)
subgraph
sub_exp = nx.subgraph_centrality_exp(g)
sub_exp
estrada = nx.estrada_index(g)
```
## **Load**
The load centrality of a node is the fraction of all shortest paths that pass through that node.
```
load = nx.load_centrality(g)
load
edge_load = nx.edge_load_centrality(g)
edge_load
```
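The same three-node toy graph makes this definition concrete (not part of the original analysis): every shortest path between the two endpoints passes through the middle node, so its normalized load is 1.

```python
import networkx as nx

path = nx.path_graph(3)            # 0 -- 1 -- 2
load_toy = nx.load_centrality(path)
print(load_toy)                    # {0: 0.0, 1: 1.0, 2: 0.0}
```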
## **Group Centrality** (error)
`group_betweenness_centrality(G, C[, …])`: Compute the group betweenness centrality for a group of nodes.
`group_closeness_centrality(G, S[, weight])`: Compute the group closeness centrality for a group of nodes.
`group_degree_centrality(G, S)`: Compute the group degree centrality for a group of nodes.
`group_in_degree_centrality(G, S)`: Compute the group in-degree centrality for a group of nodes.
`group_out_degree_centrality(G, S)`: Compute the group out-degree centrality for a group of nodes.
```
gp_btw = nx.group_betweenness_centrality(g, C=g.nodes)
gp_btw
gp_close = nx.group_closeness_centrality(g,S=g.nodes)
gp_close
gp_deg = nx.group_degree_centrality(g, g.nodes)
gp_in = nx.group_in_degree_centrality(g, g.nodes)
gp_in
gp_out = nx.group_out_degree_centrality(g, g.nodes)
gp_out
```
## **Communicability Betweenness** (error)
Communicability betweenness measure makes use of the number of walks connecting every pair of nodes as the basis of a betweenness centrality measure.
```
comm_btw = nx.communicability_betweenness_centrality(g)
```
### Training a Graph Convolution Model
Now that we have the data appropriately formatted, we can use this data to train a Graph Convolution model. First we need to import the necessary libraries.
```
import deepchem as dc
from deepchem.models import GraphConvModel
import numpy as np
import sys
import pandas as pd
import seaborn as sns
from rdkit.Chem import PandasTools
from tqdm.auto import tqdm
```
Now let's define a function to create a GraphConvModel. In this case we will be creating a classification model. Since we will be applying the model later to a different dataset, it's a good idea to create a directory in which to store the model.
```
def generate_graph_conv_model():
batch_size = 128
model = GraphConvModel(1, batch_size=batch_size, mode='classification', model_dir="./model_dir")
return model
```
Now we will read in the dataset that we just created.
```
dataset_file = "dude_erk2_mk01.csv"
tasks = ["is_active"]
featurizer = dc.feat.ConvMolFeaturizer()
loader = dc.data.CSVLoader(tasks=tasks, feature_field="SMILES", featurizer=featurizer)
dataset = loader.create_dataset(dataset_file, shard_size=8192)
```
Now that we have the dataset loaded, let's build a model.
We will create training and test sets to evaluate the model's performance. In this case we will use the RandomSplitter. DeepChem offers a number of other splitters, such as the ScaffoldSplitter, which divides the dataset by chemical scaffold, or the ButinaSplitter, which first clusters the data and then splits the dataset so that different clusters end up in the training and test sets.
```
splitter = dc.splits.RandomSplitter()
```
With the dataset split, we can train a model on the training set and test that model on the validation set.
At this point we can define some metrics and evaluate the performance of our model. In this case our dataset is unbalanced: we have a small number of active compounds and a large number of inactive compounds. Given this difference, we need to use a metric that reflects performance on unbalanced datasets. One metric that is appropriate for datasets like this is the Matthews correlation coefficient (MCC), which takes all four cells of the confusion matrix (true/false positives and negatives) into account, so a model cannot score well simply by predicting the majority class.
```
metrics = [dc.metrics.Metric(dc.metrics.matthews_corrcoef, np.mean)]
```
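To make the metric concrete, here is the MCC formula written out with NumPy (a sketch, separate from the `dc.metrics` wrapper used above), showing why a trivial majority-class model scores 0 despite high accuracy:

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient from the four confusion-matrix cells."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

y_true = [1, 1, 0, 0, 0, 0, 0, 0]      # unbalanced: 2 actives, 6 inactives
always_inactive = [0] * 8              # a useless model
print(mcc(y_true, always_inactive))    # 0.0, even though accuracy is 0.75
```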
In order to evaluate the performance of our model, we will perform 10 folds of cross-validation, where we train a model on the training set and validate on the validation set.
```
training_score_list = []
validation_score_list = []
transformers = []
cv_folds = 10
for i in tqdm(range(0,cv_folds)):
model = generate_graph_conv_model()
train_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split(dataset)
model.fit(train_dataset)
train_scores = model.evaluate(train_dataset, metrics, transformers)
training_score_list.append(train_scores["mean-matthews_corrcoef"])
validation_scores = model.evaluate(valid_dataset, metrics, transformers)
validation_score_list.append(validation_scores["mean-matthews_corrcoef"])
print(training_score_list)
print(validation_score_list)
```
To visualize the performance of our models on the training and validation data, we can make boxplots of the scores.
```
sns.boxplot(x=["training"]*cv_folds+["validation"]*cv_folds,y=training_score_list+validation_score_list);
```
It is also useful to visualize the result of our model. In order to do this, we will generate a set of predictions for a validation set.
```
pred = [x.flatten() for x in model.predict(valid_dataset)]
pred
```
**The results of predict on a GraphConv model are returned as a list of lists. Is this the intent? It doesn't seem consistent across models. RandomForest returns a list. For convenience, we will put our predicted results into a Pandas dataframe.**
```
pred_df = pd.DataFrame(pred,columns=["neg","pos"])
```
We can easily add the activity class (1 = active, 0 = inactive) and the SMILES string for our predicted molecules to the dataframe. __Is the molecule id retained as part of the DeepChem dataset? I can't find it__
```
pred_df["active"] = [int(x) for x in valid_dataset.y]
pred_df["SMILES"] = valid_dataset.ids
pred_df.head()
pred_df.sort_values("pos",ascending=False).head(25)
sns.boxplot(x=pred_df.active,y=pred_df.pos)
```
The performance of our model is very good; we can see a clear separation between the active and inactive compounds. It appears that only one of our active compounds received a low positive score. Let's look more closely.
```
false_negative_df = pred_df.query("active == 1 & pos < 0.5").copy()
PandasTools.AddMoleculeColumnToFrame(false_negative_df,"SMILES","Mol")
false_negative_df
false_positive_df = pred_df.query("active == 0 & pos > 0.5").copy()
PandasTools.AddMoleculeColumnToFrame(false_positive_df,"SMILES","Mol")
false_positive_df
```
Now that we've evaluated our model's performance we can retrain the model on the entire dataset and save it.
```
model.fit(dataset)
```
# Exercise 01 - Syntax, Variables and Numbers
Welcome to your first set of Python coding problems!
**Notebooks** are composed of blocks (called "cells") of text and code. Each of these is editable, though you'll mainly be editing the code cells to answer some questions.
To get started, try running the code cell below (by pressing the `►| Run` button, or clicking on the cell and pressing `ctrl+Enter`/`shift+Enter` on your keyboard).
```
print("You've successfully run some Python code")
print("Congratulations!")
```
Try adding another line of code in the cell above and re-running it.
Now let's get a little fancier: Add a new code cell by clicking on an existing code cell, hitting the `escape` key *(to switch to command mode)*, and then hitting the `a` or `b` key.
- The `a` key will add a cell above the current cell.
- The `b` adds a cell below.
Great! Now you know how to use Notebooks.
## 0. Creating a Variable
**What is your favorite color?**
To complete this question, create a variable called `color` in the cell below with an appropriate `string` value.
```
# Create a variable called color with an appropriate value on the line below
# (Remember, strings in Python must be enclosed in 'single' or "double" quotes)
color = 'blue'
```
<hr/>
## 1. Simple Arithmetic Operation
Complete the code below. In case it's helpful, here is the table of available arithmetic operations:
| Operator | Name | Description |
|--------------|----------------|--------------------------------------------------------|
| ``a + b`` | Addition | Sum of ``a`` and ``b`` |
| ``a - b`` | Subtraction | Difference of ``a`` and ``b`` |
| ``a * b`` | Multiplication | Product of ``a`` and ``b`` |
| ``a / b`` | True division | Quotient of ``a`` and ``b`` |
| ``a // b`` | Floor division | Quotient of ``a`` and ``b``, removing fractional parts |
| ``a % b`` | Modulus | Integer remainder after division of ``a`` by ``b`` |
| ``a ** b`` | Exponentiation | ``a`` raised to the power of ``b`` |
| ``-a`` | Negation | The negative of ``a`` |
<span style="display:none"></span>
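A few of these in action (try running and tweaking the numbers):

```python
print(7 / 2)    # 3.5   true division always produces a float
print(7 // 2)   # 3     floor division drops the fractional part
print(7 % 2)    # 1     the remainder after division
print(2 ** 10)  # 1024  exponentiation
```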
```
pi = 3.14159 # approximate
diameter = 3
# Create a variable called 'radius' equal to half the diameter
radius = diameter / 2
# Create a variable called 'area', using the formula for the area of a circle: pi times the radius squared
area = pi * radius**2
area
```
**Results**:
- Area = 7.0685775
## 2. Variable Reassignment
Add code to the following cell to swap variables `a` and `b` (so that `a` refers to the object previously referred to by `b` and vice versa).
```
# If you're curious, these are examples of lists. We'll talk about
# them in depth a few lessons from now. For now, just know that they're
# yet another type of Python object, like int or float.
a = [1, 2, 3]
b = [3, 2, 1]
######################################################################
# Your code goes here. Swap the values to which a and b refer.
# Hint: Try using a third variable
c = b
b = a
a = c
print('a = ', a)
print('b = ', b)
```
## 3. Order of Operations
a) Add parentheses to the following expression so that it evaluates to 1.
*Hint*: Following its default "**PEMDAS**"-like rules for order of operations, Python will first divide 3 by 2, then subtract the result from 5. You need to add parentheses to force it to perform the subtraction first.
```
(5 - 3) // 2
```
<small>Questions marked with a spicy pepper, like the next one, are a bit harder. Don't feel bad if you can't get these.</small>
b) <span title="A bit spicy" style="color: darkgreen ">🌶️</span> Add parentheses to the following expression so that it evaluates to **0**.
```
8 - (3 * 2) - (1 + 1)
```
## 4. Your Turn
Alice, Bob and Carol have agreed to pool their Halloween candies and split it evenly among themselves.
For the sake of their friendship, any candies left over will be smashed. For example, if they collectively
bring home 91 candies, they'll take 30 each and smash 1.
Write an arithmetic expression below to calculate how many candies they must smash for a given haul.
> *Hint*: You'll probably want to use the modulo operator, `%`, to obtain the remainder of division.
```
# Variables representing the number of candies collected by Alice, Bob, and Carol
alice_candies = 121
bob_candies = 77
carol_candies = 109
# Your code goes here! Replace the right-hand side of this assignment with an expression
candies_smash = (alice_candies + bob_candies + carol_candies) % 3
candies_smash
# involving alice_candies, bob_candies, and carol_candies
#to_smash = -1
```
# Keep Going 💪
|
github_jupyter
|
<a href="https://colab.research.google.com/github/semishen/DL-CVMarathon/blob/master/Day015_Cifar_HW.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Exercise
#### Use the concepts covered over the past few days to build a CNN classifier
## Goals
#### Become familiar with the steps and principles of building a CNN classifier
#### Feel free to try different architectures, e.g. different MaxPooling layers, or replacing Flatten with GlobalAveragePooling
```
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import BatchNormalization
from keras.layers import Activation
import keras.utils
from keras.datasets import cifar10
import numpy as np
import matplotlib.pyplot as plt
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape) #(50000, 32, 32, 3)
## Normalize Data
def normalize(X_train,X_test):
mean = np.mean(X_train,axis=(0,1,2,3))
std = np.std(X_train, axis=(0,1,2,3))
X_train = (X_train-mean)/(std+1e-7)
X_test = (X_test-mean)/(std+1e-7)
return X_train, X_test, mean, std
## Normalize Training and Testset
x_train, x_test, mean_train, std_train = normalize(x_train, x_test)
# ## One-hot encode labels: (None, 1) -> (None, 10)
# ## e.g. label=2 becomes [0,0,1,0,0,0,0,0,0,0]
# one_hot=OneHotEncoder()
# y_train=one_hot.fit_transform(y_train).toarray()
# y_test=one_hot.transform(y_test).toarray()
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
classifier=Sequential()
# Convolution block
classifier.add(Convolution2D(kernel_size=(3,3), filters=16, input_shape=(32,32,3)))
classifier.add(BatchNormalization())
classifier.add(Activation('relu'))
# Decide for yourself where to place MaxPooling2D
classifier.add(MaxPooling2D(pool_size=(2,2)))
# Convolution block
classifier.add(Convolution2D(kernel_size=(3,3), filters=32))
classifier.add(BatchNormalization())
classifier.add(Activation('relu'))
classifier.add(MaxPooling2D(pool_size=(2,2)))
#flatten
classifier.add(Flatten())
#FC
classifier.add(Dense(100))
classifier.add(Activation('relu'))
# Output layer
classifier.add(Dense(units=10,activation='softmax'))
classifier.summary()
# With more than two classes, use categorical_crossentropy
classifier.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
classifier.fit(x_train,y_train,batch_size=100,epochs=100)
```
## Predicting on new images: input preprocessing must match what was used at training time
#### ((X-mean)/(std+1e-7)): the mean and std here come from the training set
## Input dimensions as demonstrated below
```
index = 50
input_example = x_test[index].reshape(1,32,32,3)
# Note: x_test was already normalized above, so no further scaling is needed here.
# A *raw* image would first need (X - mean_train) / (std_train + 1e-7) applied.
print(classifier.predict(input_example))
print(y_test[index])
```
# Transmission
```
%matplotlib inline
import numpy as np
np.seterr(divide='ignore') # Ignore divide by zero in log plots
from scipy import signal
import scipy.signal
from numpy.fft import fft, fftfreq
import matplotlib.pyplot as plt
#import skrf as rf # pip install scikit-rf if you want to run this one
```
First, let's set up a traditional, full-precision modulator and plot the spectrum of that as a baseline
```
def prbs(n=0, taps=[]):
state = [1]*n
shift = lambda s: [sum([s[i] for i in taps]) % 2] + s[0:-1]
out = []
for i in range(2**n - 1):
out.append(state[-1])
state = shift(state)
return out
prbs9 = lambda: prbs(n=9, taps=[4,8])
def make_carrier(freq=None, sample_rate=None, samples=None, phase=0):
t = (1/sample_rate)*np.arange(samples)
return np.real(np.exp(1j*(2*np.pi*freq*t - phase)))
def modulate_gmsk(bits, carrier_freq=2.402e9, sample_rate=5e9, baseband=False, phase_offset=0, include_phase=False):
symbol_rate = 1e6 # 1Mhz
BT = 0.5
bw = symbol_rate*BT/sample_rate
samples_per_symbol = int(sample_rate/symbol_rate)
# This looks scary but it's just a traditional gaussian distribution from wikipedia
kernel = np.array([(np.sqrt(2*np.pi/np.log(2))*bw)*np.exp(-(2/np.log(2))*np.power(np.pi*t*bw, 2)) for t in range(-5000,5000)])
    kernel /= sum(kernel) # Normalize so the amplitude after convolution remains the same
rotation = np.repeat(bits, sample_rate/symbol_rate)*2.0 - 1.0
smoothed_rotation = np.convolve(rotation, kernel,mode='same')
angle_per_sample = (np.pi/2.0)/(samples_per_symbol)
current_angle = phase_offset
modulated = np.zeros((len(smoothed_rotation),), dtype=np.complex64) # Represents I and Q as a complex number
i = 0
for bit in smoothed_rotation:
current_angle += angle_per_sample*bit
modulated[i] = np.exp(1j*current_angle)
i += 1
if baseband:
return modulated
I = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(modulated), phase=0)
Q = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(modulated), phase=np.pi/2)
if include_phase:
return np.real(modulated)*I + np.imag(modulated)*Q, np.angle(modulated)
return np.real(modulated)*I + np.imag(modulated)*Q
```
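Before using the modulator it is worth sanity-checking the PRBS generator: a maximal-length 9-bit sequence has period 2**9 - 1 = 511 and contains exactly 256 ones. A self-contained copy of the generator above:

```python
def prbs(n=0, taps=[]):
    state = [1] * n
    shift = lambda s: [sum(s[i] for i in taps) % 2] + s[:-1]
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        state = shift(state)
    return out

seq = prbs(n=9, taps=[4, 8])
print(len(seq), sum(seq))   # 511 256 -- one full period, balanced as expected
```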
Now let's look at the FFT of this...
```
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
```
This is clean (as one would expect). Now let's see what happens if we reduce things to 1 bit of precision by just rounding.
# The Naive Approach (Rounding)
```
sample_rate=5e9
modulated_5g = modulated = np.sign(modulate_gmsk(prbs9(), sample_rate=sample_rate))
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
_Oof_, this is not pretty. What's happening here is that (I think) the aliases are mixing with each other to produce these interference patterns. In this case, the big subharmonics are spaced about 200MHz apart, which makes sense given the alias of 2.402GHz at 2.598GHz when sampling at 5GHz (folding around the 2.5GHz Nyquist frequency).
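The folding arithmetic behind these numbers can be checked directly (a small helper written for this note, not part of the original notebook):

```python
def first_image(f, fs):
    """Frequency of the lowest spectral image of a tone at f generated at sample rate fs."""
    return fs - f

carrier = 2.402e9

# At fs = 5 GHz the image folds down to 2.598 GHz, ~196 MHz from the carrier
print(first_image(carrier, 5e9) / 1e9)                    # ~2.598
print((first_image(carrier, 5e9) - carrier) / 1e6)        # ~196 -> the ~200 MHz spacing

# At fs = 6 GHz the image lands at 3.598 GHz, ~1.2 GHz from the carrier
print((first_image(carrier, 6e9) - carrier) / 1e9)        # ~1.196 -> the ~1.2 GHz spacing
```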
```
sample_rate = 6e9
modulated_6g = modulated = np.sign(modulate_gmsk(prbs9(), sample_rate=sample_rate))
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Unfiltered")
```
Ok, in this case, the alias is at `3 + (3 - 2.402) = 3.598GHz`. The difference between this and 2.402GHz is about 1.2GHz, which matches the spacing to the next big peak, so this makes sense. From this math, we can intuit that it's a good idea for the sample rate to be a whole-number multiple of the carrier frequency. In the ideal case, 4 times the carrier:
```
sample_rate = 2.402e9*4
modulated_4x = modulated = np.sign(modulate_gmsk(prbs9(), sample_rate=sample_rate))
fftm = np.abs(fft(modulated))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
There are a couple of challenges here, however:
1. In order to get the clean(ish) spectrum, we have to clock the output at a rate tied to the carrier frequency. If we only intended to use one frequency, this would be fine, but Bluetooth (as an example) hops around in frequency constantly by design. This might be doable, but it's kind of painful (it might require various SERDES resets, which aren't instantaneous).
2. At 2.402GHz, 4x this would be... 9.608GHz, which is too fast for my (low-end-ish) SERDES, which maxes out around 6GHz.
# Adding a Reconstruction Filter
In order to prevent a friendly visit from an unmarked FCC van, it's more or less mandatory that we filter noise outside of the band of interest. In our case, I have a tiny 2.4GHz surface-mount band-pass filter that I've put onto a test board. This is the delightfully named "DEA252450BT-2027A1", which has the following frequency response:

To (more fully) characterize this filter, I hooked it up to a NanoVNA2 and saved its S-parameters using NanoVNA Saver:
```
# pip install scikit-rf if you want to run this one
# Note: running this before we've plotted anything, borks matplotlib
import skrf as rf
filter2_4 = rf.Network('2_4ghzfilter.s2p')
filter2_4.s21.plot_s_db()
```
Hey, that's not too far off from the datasheet (at least up to 4.4GHz).
To turn this into a filter, we can use scikit-rf to compute an impulse response, which we can then convolve with our input data to see what the filtered output would be:
```
ts, ms = filter2_4.s21.impulse_response()
impulse_response = ms[list(ts).index(0):]
impulse_response = impulse_response/np.max(impulse_response)
tstep = ts[1] - ts[0]
print("Timestep {} seconds, frequency {:e} hz".format(tstep, 1/tstep))
plt.plot(impulse_response)
plt.gca().set_xlim(0, 300)
```
This is great and all, but the impulse response is sampled at north of 30GHz (!). Our output SERDES runs at around 6GHz, so let's resample to that rate.
```
# Truncate the impulse response so we can get relatively close to 6ghz
trunc = impulse_response[:-4]
size = int((tstep*(len(trunc) - 1))/(1/6e9) + 1)
print(size)
impulse_response_6g = scipy.signal.resample(impulse_response, size)
plt.plot(impulse_response_6g)
plt.gca().set_xlim(0, 50)
```
Not quite as pretty, but it's what we need. Let's verify that this does "the thing" by filtering our 6GHz signal:
```
sample_rate = 6e9
fftm = np.abs(fft(np.convolve(modulated_6g, impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
This looks better, but the passband of my filter is still super wide (hundreds of MHz; not surprising for a 50-cent filter, so I should look at the B39242B9413K610, a $1 surface acoustic wave filter). We see some nontrivial imaging up to -12dB, which is... not great.
What to do?
# Delta Sigma Modulation
A way around this is to use something called Delta Sigma Modulation. The way to think about this conceptually is that we keep a running sum of values we've output (think of this as the error) and factor this into the value we decide to output (versus just blindly rounding the current value). Further, you can filter this feedback loop to "shape" the noise to different parts of the spectrum (that we can filter out elsewhere).
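As a minimal illustration of that feedback idea (a first-order loop, not the second-order modulator used below), carrying the quantization error from sample to sample is all it takes to make the 1-bit output's running average track the input:

```python
import numpy as np

def dsm1(sig):
    """First-order delta-sigma: quantize to +/-1 and feed the error back."""
    err = 0.0
    out = np.zeros(len(sig))
    for i, s in enumerate(sig):
        v = s - err               # subtract the accumulated quantization error
        out[i] = 1.0 if v >= 0 else -1.0
        err = out[i] - v          # error committed on this sample
    return out

# A constant 0.25 input: blind rounding would output all 1s (mean 1.0),
# but the delta-sigma output's average tracks the input.
y = dsm1(np.full(8000, 0.25))
print(y.mean())
```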
A good place to read about this is [Wikipedia](https://en.wikipedia.org/wiki/Delta-sigma_modulation#Oversampling). In [Novel Architectures for Flexible and Wideband All-digital Transmitters](https://ria.ua.pt/bitstream/10773/23875/1/Documento.pdf) by Rui Fiel Cordeiro, Rui proposes using a filter that has a zero at the carrier of interest, which looks like the following
```
def pwm2(sig, k=1.0):
z1 = 0.0
z2 = 0.0
out = np.zeros(len(sig))
for i in range(len(sig)):
v = sig[i] - (k*z1 + z2)
out[i] = np.sign(v)
z2 = z1
z1 = v - out[i]
return out
```
To be clear, `pwm2` replaces the plain `np.sign` rounding from before:
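Before using it, we can check that the feedback polynomial really nulls quantization noise at the carrier. Assuming the loop implements the FIR noise transfer function NTF(z) = 1 + k·z⁻¹ + z⁻² (my reading of `pwm2`'s difference equations), the choice k = -2·cos(2π·fc/fs) places an exact zero at fc:

```python
import numpy as np

fs, fc = 6e9, 2.402e9
k = -2.0 * np.cos(2.0 * np.pi * fc / fs)   # same k as passed to pwm2 below

def ntf(f):
    """Evaluate NTF(z) = 1 + k*z^-1 + z^-2 on the unit circle at frequency f."""
    z = np.exp(1j * 2.0 * np.pi * f / fs)
    return 1.0 + k / z + 1.0 / z ** 2

print(abs(ntf(fc)))      # essentially zero: quantization noise is nulled at the carrier
print(abs(ntf(2.2e9)))   # nonzero away from the carrier
```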
```
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
modulatedsd5 = modulated = pwm2(modulated, k=-2.0*np.cos(2.0*np.pi*2.402e9/sample_rate))
fftm = np.abs(fft(np.sign(modulated)))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(modulated), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Second order Delta Sigma Modulation")
```
Now let's filter this with our output filter
```
fftm = np.abs(fft(np.convolve(modulatedsd5, impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Filtered Second Order Delta Sigma Modulation")
```
This is better in the immediate vicinity of our signal.
You'll notice on the wikipedia page that we can use increasing filter orders to increase the steepness of the valley around our signal of interest.
On one hand this is good, but because our filter is not very good (tm) this actually results in higher peaks than we'd like at around 2.2ghz.
Given that our filter is... not that good, can we design the filter in the modulator to complement it?
# Filter-Aware Sigma Delta Modulator
I lay no claim to this awesome work by the folks who wrote pydsm, but it's great -- feed it an impulse response for a reconstruction filter and it will optimize a noise transfer function that matches it:
```
from pydsm.ir import impulse_response
from pydsm.delsig import synthesizeNTF, simulateDSM, evalTF
from pydsm.delsig import dbv, dbp
from pydsm.NTFdesign import quantization_noise_gain
from pydsm.NTFdesign.legacy import q0_from_filter_ir
from pydsm.NTFdesign.weighting import ntf_fir_from_q0
H_inf = 1.6 # Maximum out-of-band gain of the NTF (the Lee criterion limit)
q0 = q0_from_filter_ir(51, impulse_response_6g) # 51 is the number of filter coefficients
ntf_opti = ntf_fir_from_q0(q0, H_inf=H_inf)
```
Let's see how well we did. Anecdotally, this is not a _great_ solution, but I'd wager that's because the oversampling rate is super low.
```
# Take the frequency response
samples = filter2_4.s21.s_db[:,0,0]
# Normalize the frequencies to the sample rate
ff = filter2_4.f/6e9
# Compute frequency response data
resp_opti = evalTF(ntf_opti, np.exp(1j*2*np.pi*ff))
# Plot the output filter,
plt.figure()
plt.plot(ff*6e9, dbv(resp_opti), 'r', label="Optimal NTF")
plt.plot(ff*6e9, samples, 'b', label="External Filter")
plt.plot(ff*6e9, dbv(resp_opti) + samples, 'g', label="Resulting Noise Shape")
plt.gca().set_xlim(0, 3e9)
plt.legend(loc="lower right")
plt.suptitle("Output filter and NTFs")
```
Ok, so it's not amazing, but definitely an improvement. But now that we've got this monstrous 51-coefficient NTF, how do we modulate with it?
Fortunately, pydsm comes to the rescue!
```
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
xx_opti = simulateDSM(modulated, ntf_opti)
fftm = np.abs(fft(np.convolve(xx_opti[0], impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
```
Ok, so we've basically "filled in the valley" with the peaks from either side. We've cut the max spurs down by about 3dB. Not amazing, but not bad!
After looking around at SAW filters I realized how impressive they can be in this frequency range, so I ordered one (the CBPFS-2441) to try. Unfortunately, the datasheets only show _drawings_ of parameters (and only phase), and actual s2p files are impossible to find. This seems dumb. Nevertheless, https://apps.automeris.io/wpd/ exists, which allows you to estimate a graph's data from an image.
```
import csv
from scipy.interpolate import interp1d
traced = np.array([(float(f), float(d)) for f,d in csv.reader(open('saw_filter_traced.csv'))])
# Interpolate to 600 equally spaced points (this means 1200 total, so 1200 * 5MHz -> 6GHz sampling rate)
x = traced[:,0]
y = -1*traced[:,1]
f = interp1d(x, y)
x = np.array(np.linspace(5, 3000, 600))
y = np.array(f(x))
x = np.concatenate((np.flip(x)*-1, np.array([0]), x))
# In FFT format
y_orig = 10**(np.concatenate((np.array([-70]), y, np.flip(y)))/10)
y = 10**(np.concatenate((np.flip(y), np.array([-70]), y))/10.0)
plt.plot(x, 10*np.log10(y))
```
Let's look at the impulse response quickly
```
impulse = np.fft.ifft(y_orig)
impulse_trunc = impulse[:300]
plt.plot(np.real(impulse_trunc))
```
**Update:** The filter finally arrived and I can characterize it, as shown below...
(the remaining code uses the measured filter response rather than the one traced from the image)
```
sawfilter = rf.Network('crysteksawfilter.s2p')
sawfilter.s21.plot_s_db()
filter2_4.s21.plot_s_db()
ts, ms = sawfilter.s21.impulse_response()
impulse_response = ms[list(ts).index(0):]
impulse_response = impulse_response/np.max(impulse_response)
tstep = ts[1] - ts[0]
print("Timestep {} seconds, frequency {:e} hz".format(tstep, 1/tstep))
plt.plot(impulse_response)
plt.gca().set_xlim(0, 600)
plt.show()
trunc = impulse_response[:-2]
size = int((tstep*(len(trunc) - 1))/(1/6e9) + 1)
print(size)
impulse_response_6g = scipy.signal.resample(impulse_response, size)
plt.plot(impulse_response_6g)
plt.gca().set_xlim(0, 400)
```
Wow that is a fair bit sharper.
```
H_inf = 1.5
q0 = q0_from_filter_ir(49, np.real(impulse_response_6g))
ntf_opti = ntf_fir_from_q0(q0, H_inf=H_inf)
# Take the frequency response
samples = sawfilter.s21.s_db[:,0,0]
# Normalize the frequencies to the sample rate
ff = sawfilter.f/6e9
# Compute frequency response data
resp_opti = evalTF(ntf_opti, np.exp(1j*2*np.pi*ff))
# Plot the output filter,
plt.figure()
plt.plot(ff*6e9, dbv(resp_opti), 'r', label="Optimal NTF")
plt.plot(ff*6e9, samples, 'b', label="External Filter")
plt.plot(ff*6e9, dbv(resp_opti) + samples, 'g', label="Resulting Noise Shape")
plt.gca().set_xlim(0, 3e9)
plt.legend(loc="lower left")
plt.suptitle("Output filter and NTFs")
sample_rate = 6e9
modulated = modulate_gmsk(prbs9(), sample_rate=sample_rate)
xx_opti = simulateDSM(modulated, ntf_opti)
fftm = np.abs(fft(np.convolve(xx_opti[0], impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
fftbins = fftfreq(len(fftm), 1/sample_rate)
plt.figure(figsize=(10,6))
plt.plot(fftbins, 10*np.log10(fftm))
plt.grid(True)
plt.gca().set_ylim(-40, 0)
plt.gca().set_xlim(0, 3e9)
plt.title("Optimized Filtered Output")
```
Wow, the baseline noise level has dropped by almost 10dB! Impressive!
# Symbol Dictionaries
Now that we've figured out how much noise we can stifle with this setup, we can begin to design our transmitter.
Now you may notice that the above noise transfer function filter is... quite expensive, clocking in at 49 coefficients. While we might be able to implement this on our FPGA, a better question is -- can we avoid it?
Given that we're transmitting digital data with a finite number of symbols, it turns out we can just pre-compute the symbols, store them in a dictionary and then play back the relevant pre-processed symbol when we need to transmit a given symbol. Simple!
Except, GMSK is not _quite_ that simple in this context, because we have to consider not only 1s and 0s but also where we currently are on the phase plot. If you think about GMSK visually on a constellation diagram, one symbol is represented by a 90-degree arc on the unit circle, traversed either clockwise or counterclockwise. This is further complicated by the fact that the Gaussian smoothing makes the velocity of the arc slow down when the next bit differs from the current bit (because it needs to gradually change direction).
The result of this (if you enumerate all the combinations) is that we end up with a 32-symbol table. This is not the _only_ way to simplify these symbols, nor the most efficient, but it's the simplest from an implementation perspective. I spent some time figuring out a train of bits that would iterate through every symbol. I'm sure there's a more optimal pattern, but efficiency is not hugely important when we only need to run this once while precomputing.
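Once the table exists, transmitting reduces to indexed playback: look up each symbol's precomputed 1-bit waveform and concatenate. A toy sketch with a made-up table (the sizes here are hypothetical; the real dictionary below holds 32 entries of `samples_per_symbol` samples each):

```python
import numpy as np

samples_per_symbol = 8                   # tiny, for illustration only
rng = np.random.default_rng(0)
# Stand-in for the precomputed 1-bit symbol waveforms
table = np.where(rng.random((4, samples_per_symbol)) < 0.5, -1.0, 1.0)

def play(symbol_indices, table):
    """Concatenate the stored waveform for each symbol index."""
    return np.concatenate([table[i] for i in symbol_indices])

out = play([0, 3, 1, 1], table)
print(out.shape)   # (32,)
```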
```
carrier_freq = 2.402e9
sample_rate = 6e9
symbol_rate = 1e6
samples_per_symbol = int(sample_rate/symbol_rate)
# Used to test that we've mapped things correctly.
# Note that this returns the phase angle, not the output bits
def demodulate_gmsk(sig, phase_offset=0):
I = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=0 + phase_offset)
Q = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=np.pi/2 + phase_offset)
# Mix down to (complex) baseband
down = sig*I + 1j*sig*Q
# Create a low pass filter at the symbol rate
sos = signal.butter(5, symbol_rate, 'low', fs=sample_rate, output='sos')
filtered_down = signal.sosfilt(sos, down)
# Take the phase angle of the baseband
return np.angle(filtered_down)
# The sequence of bits to modulate
seq = [0, 0, 0, 1, 1, 1,
0, 0, 1, 0, 1, 1,
0, 0,
1, 0, 1, 0, 1, 0,
0, 1,
1, 0, 1, 0, 0, 0]
# The relevant samples to pull out and store in the dictionary
samples = np.array([1, 4, 7, 10, 14, 17, 22, 25])
fig, axs = plt.subplots(4, 8, sharey=True, figsize=(24, 12))
dictionary = np.zeros((4*8, samples_per_symbol))
for q in range(4):
current_angle = [0, np.pi/2, np.pi, np.pi*3/2][q]
# Modulate the symbols with our optimized delta-sigma modulator
modulated, angle = modulate_gmsk(seq, phase_offset=current_angle, sample_rate=sample_rate, include_phase=True)
modulated = simulateDSM(modulated, ntf_opti)[0]
demodulated = demodulate_gmsk(modulated, phase_offset=0)
n = 0
for i in samples:
iqsymbol = modulated[samples_per_symbol*i:samples_per_symbol*(i+1)]
dictionary[q*8 + n,:] = iqsymbol
axs[q, n].plot(np.unwrap(angle[samples_per_symbol*i:samples_per_symbol*(i+1)]))
n += 1
```
With these established, let's concatenate a few symbols together, demodulate to phase angle and make sure things look nice and smooth
```
def sim(out):
carrier=2.402e9
I = make_carrier(freq=carrier, sample_rate=sample_rate, samples=len(out), phase=0)
Q = make_carrier(freq=carrier, sample_rate=sample_rate, samples=len(out), phase=np.pi/2)
sos = signal.butter(2, symbol_rate, 'low', fs=sample_rate, output='sos')
rx_baseband = signal.sosfilt(sos, out*I + 1j*out*Q)
plt.plot(np.angle(rx_baseband))
sim(np.concatenate((dictionary[4,:], dictionary[5,:], dictionary[4,:], dictionary[5,:])))
sim(-1.0*np.concatenate((dictionary[13,:], dictionary[12,:], dictionary[13,:], dictionary[12,:])))
sim(np.concatenate((dictionary[21,:], dictionary[20,:], dictionary[21,:], dictionary[20,:])))
sim(-1.0*np.concatenate((dictionary[28,:], dictionary[29,:], dictionary[28,:], dictionary[29,:])))
```
Now, in order to synthesize this, we need a bit more logic to map between a bit stream and its respective symbols.
Note that there is additional state (i.e. the current phase offset) that factors into the symbol encoding beyond just the symbol value itself, which makes things a bit more complicated than most other simple forms of modulation. The code below keeps track of the starting phase angle at a given symbol, as well as the bits before and after it, to select the right symbol.
```
idx = {
'000': 0,
'111': 1,
'001': 2,
'011': 3,
'010': 4,
'101': 5,
'110': 6,
'100': 7
}
start_q = [
[3, 2, 3, 2, 2, 3, 2, 3],
[0, 3, 0, 3, 3, 0, 3, 0],
[1, 0, 1, 0, 0, 1, 0, 1],
[2, 1, 2, 1, 1, 2, 1, 2]
]
def encode(bitstream):
out = np.zeros((len(bitstream)*samples_per_symbol,))
q = 0
prev = bitstream[0]
bitstream = bitstream + [bitstream[-1]] # Pad at the end so we can do a lookup
syms = []
for i in range(len(bitstream) - 1):
n = idx[str(prev) + str(bitstream[i]) + str(bitstream[i+1])]
d = -1
for j in range(4):
if start_q[j][n] == q:
d = j*8 + n
assert d != -1
syms.append(d)
out[i*samples_per_symbol:(i+1)*samples_per_symbol] = dictionary[d]
if bitstream[i]:
q = (q + 1) % 4
else:
q = (q + 4 - 1) % 4
prev = bitstream[i]
return out, syms
# Whitened bits from elsewhere
wbits = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
out, syms = encode([1 - b for b in wbits])
# Let's look at the resulting symbol indexes
print(syms)
```
As a reminder, the dictionary is really just one bit of precision:
```
dictionary[0][:100]
```
Let's demodulate the encoded bits to check that things make sense (note that the filtering will delay the output a bit in time, but it demodulates correctly)
```
def demodulate_gmsk(sig):
carrier_freq=2.402e9
I = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=0)
Q = make_carrier(freq=carrier_freq, sample_rate=sample_rate, samples=len(sig), phase=np.pi/2)
# Mix down to (complex) baseband
down = sig*I + 1j*sig*Q
# Create a low pass filter at the symbol rate
sos = signal.butter(5, symbol_rate, 'low', fs=sample_rate, output='sos')
filtered_down = signal.sosfilt(sos, down)
# Take the phase angle of the baseband
angle = np.unwrap(np.angle(filtered_down))
# Take the derivative of the phase angle and hard limit it to 1:-1
return -(np.sign(angle[1:] - angle[:-1]) + 1.0)/2.0
plt.figure(figsize=(40,3))
plt.plot(demodulate_gmsk(out))
plt.plot(np.repeat(wbits, int(sample_rate/1e6)) + 1.5)
plt.gca().set_xlim(0, 0.6e6)
fftout = np.abs(fft(out))
fftout = fftout/np.max(fftout)
plt.figure(figsize=(10,6))
plt.plot(fftfreq(len(out), d=1/sample_rate), 10*np.log10(fftout))
plt.gca().set_xlim(0, 3e9)
plt.gca().set_ylim(-80, 0)
plt.title("BLE Packet Before Reconstruction Filter")
plt.show()
fftm = np.abs(fft(np.convolve(out, impulse_response_6g, mode="same")))
fftm = fftm/np.max(fftm)
plt.figure(figsize=(10,6))
plt.plot(fftfreq(len(out), d=1/sample_rate), 10*np.log10(fftm))
plt.gca().set_xlim(0, 3e9)
plt.gca().set_ylim(-80, 0)
plt.title("BLE Packet After Reconstruction Filter")
plt.show()
plt.figure(figsize=(10,6))
plt.plot(fftfreq(len(out), d=1/sample_rate), 10*np.log10(fftm))
plt.gca().set_xlim(2.402e9 - 5e6, 2.402e9 + 5e6)
plt.gca().set_ylim(-80, 0)
plt.title("BLE Packet After Reconstruction Filter (10MHz span)")
```
The library used to generate the NTF filter uses a copyleft license, so rather than integrate that into the code, we save out the resulting symbol waveforms and use those directly.
```
np.save('../data/gmsk_2402e6_6e9.npy', dictionary)
```
```
%matplotlib inline
import numpy as np
from scipy.sparse.linalg import spsolve
from scipy.sparse import csr_matrix
import matplotlib.pyplot as plt
import seaborn as sns
from condlib import conductance_matrix_READ
from timeit import default_timer as timer
# Memory array parameters
rL = 12
rHRS = 1e6
rPU = 1e3
n = 16
vRead = [0.5, 1.0, 1.6, 2.0, 2.5, 3.0, 4.0]
hubList = []
lsbList = []
WLvoltagesList = []
BLvoltagesList = []
cellVoltagesList = []
mask = np.ones((n, n), dtype=bool)
mask[n-1][n-1] = False
for v in vRead:
# Voltages for BLs and WLs (read voltages, unselected floating)
vBLsel = 0.0
vWLsel = v
start_t = timer()
# Create conductance matrix
conductancematrix, iinvector = conductance_matrix_READ(n, rL, rHRS, rPU,
vWLsel, vBLsel,
isel=n-1, jsel=n-1, verbose=False)
# Convert to sparse matrix (CSR)
conductancematrix = csr_matrix(conductancematrix)
# Solve
voltages = spsolve(conductancematrix, iinvector)
stop_t = timer()
# Separate WL and BL nodes and calculate cell voltages
WLvoltages = voltages[:n*n].reshape((n, n))
BLvoltages = voltages[n*n:].reshape((n, n))
WLvoltagesList.append(WLvoltages)
BLvoltagesList.append(BLvoltages)
cellVoltages = abs(BLvoltages - WLvoltages)
cellVoltagesList.append(cellVoltages)
# Calculate Highest Unselected Bit and Lowest Selected Bit
hub = np.max(cellVoltages[mask])
lsb = cellVoltages[n-1][n-1]
hubList.append(hub)
lsbList.append(lsb)
print("{:.4f} sec".format(stop_t - start_t))
print("Read voltage : {:.4f} V".format(v))
print("Highest unselected bit : {:.4f} V".format(hub))
print("Lowest selected bit : {:.4f} V".format(lsb))
if n < 9:
sns.heatmap(WLvoltagesList[2], square=True)
else:
sns.heatmap(WLvoltagesList[2], square=True, xticklabels=n//8, yticklabels=n//8)
plt.savefig("figures/read_mapWL_{}.png".format(n), dpi=300)
if n < 9:
sns.heatmap(BLvoltagesList[2], square=True)
else:
sns.heatmap(BLvoltagesList[2], square=True, xticklabels=n//8, yticklabels=n//8)
plt.savefig("figures/read_mapBL_{}.png".format(n), dpi=300)
if n < 9:
sns.heatmap(cellVoltagesList[2], square=True)
else:
sns.heatmap(cellVoltagesList[2], square=True, xticklabels=n//8, yticklabels=n//8)
plt.savefig("figures/read_mapCell_{}.png".format(n), dpi=300)
plt.plot(vRead, hubList, vRead, lsbList)
plt.plot([0.5, 4], [1.1, 1.1], [0.5, 4], [2.2, 2.2], c='gray', ls='--')
plt.plot([0.5, 4], [1.2, 1.2], c='gray', ls='--')
plt.xlim([0,4.5])
plt.ylim([0,4.5])
plt.ylabel("Vcell")
plt.xlabel("Vread")
plt.savefig("figures/read_margin_{}.png".format(n), dpi=300)
plt.show()
# Find window
windowlsb = np.interp([1.2, 2.2], lsbList, vRead)
windowhub = np.interp(1.1, hubList, vRead)
print(windowlsb)
print(windowhub)
# Output data to csv
np.savetxt("data/read_margin_{}.csv".format(n),
np.vstack((vRead, lsbList, hubList)).T,
delimiter=',',
header="Vread,VcellLSB,VcellHUB",
footer=",WindowLSB = {} - {}, WindowHUB < {}".format(windowlsb[0], windowlsb[1], windowhub),
comments='')
np.savetxt("data/read_mapCell_{}.csv".format(n),
cellVoltagesList[2],
delimiter=',')
np.savetxt("data/read_mapWL_{}.csv".format(n),
WLvoltagesList[2],
delimiter=',')
np.savetxt("data/read_mapBL_{}.csv".format(n),
BLvoltagesList[2],
delimiter=',')
```
# Pre-processing pipeline for spikeglx sessions, zebra finch
Bird z_w12m7_20
- For every run in the session:
- Load the recordings
- Extract the wav chan with the microphone and make a wav chan with the nidq sync signal
- Get the sync events for the nidq sync channel
- Do bout detection
In another notebook, bout detection is curated
- Left to decide where to:
- Sort spikes
- Sync the spikes/lfp/nidq
- make and plot 'bout rasters'
```
%matplotlib inline
import os
import glob
import logging
import pickle
import numpy as np
import pandas as pd
from scipy.io import wavfile
from scipy import signal
import traceback
import warnings
from matplotlib import pyplot as plt
from importlib import reload
logger = logging.getLogger()
handler = logging.StreamHandler()
formatter = logging.Formatter(
'%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
from ceciestunepipe.file import bcistructure as et
from ceciestunepipe.util import sglxutil as sglu
from ceciestunepipe.util import rigutil as ru
from ceciestunepipe.util import wavutil as wu
from ceciestunepipe.util import syncutil as su
from ceciestunepipe.util.sound import boutsearch as bs
from ceciestunepipe.util.spikeextractors import preprocess as pre
from ceciestunepipe.util.spikeextractors.extractors.spikeglxrecordingextractor import readSGLX as rsgl
from ceciestunepipe.util.spikeextractors.extractors.spikeglxrecordingextractor import spikeglxrecordingextractor as sglex
import spikeinterface as si
import spikeinterface.extractors as se
import spikeinterface.toolkit as st
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc
import spikeinterface.widgets as sw
logger.info('all modules loaded')
```
## Session parameters and raw files
#### list all the sessions for this bird
```
bird = 'z_w12m7_20'
all_bird_sess = et.list_sessions(bird)
logger.info('all sessions for bird are {}'.format(all_bird_sess))
```
### set up bird and sessions parameters
this will define:
- locations of files (for the bird)
- signals and channels to look for in the metadata of the files and in the rig.json parameter file: Note that these have to exist in all of the sessions that will be processed
- 'sess' is unimportant here, but it comes in handy if there is a need to debug using a single session
```
reload(et)
# for one example session
sess_par = {'bird': 'z_w12m7_20',
'sess': '2020-11-04',
'probes': ['probe_0'], #probes of interest
'mic_list': ['microphone_0'], #list of mics of interest, by signal name in rig.json
'sort': 4, #label for this sort instance
}
exp_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], sort=sess_par['sort'])
ksort_folder = exp_struct['folders']['ksort']
raw_folder = exp_struct['folders']['sglx']
```
list all the epochs in a session, to check that it is finding what it has to find
```
sess_epochs = et.list_sgl_epochs(sess_par)
sess_epochs
```
#### define pre-processing steps for each epoch and for the session
```
def preprocess_run(sess_par, exp_struct, epoch):
# get the recordings
logger.info('PREPROCESSING sess {} | epoch {}'.format(sess_par['sess'], epoch))
logger.info('getting extractors')
sgl_exp_struct = et.sgl_struct(sess_par, epoch)
run_recs_dict, run_meta_files, files_pd, rig_dict = pre.load_sglx_recordings(sgl_exp_struct, epoch)
# get the microphone to wav
# get the chans
mic_list = sess_par['mic_list']
logger.info('Getting microphone channel(s) {}'.format(mic_list))
mic_stream = pre.extract_nidq_channels(sess_par, run_recs_dict, rig_dict, mic_list, chan_type='adc')
# get the sampling rate
nidq_s_f = run_recs_dict['nidq'].get_sampling_frequency()
mic_file_path = os.path.join(sgl_exp_struct['folders']['derived'], 'wav_mic.wav')
wav_s_f = wu.save_wav(mic_stream, nidq_s_f, mic_file_path)
# get the syn to wav
# get the chans
sync_list = ['sync']
logger.info('Getting sync channel(s) {}'.format(sync_list))
sync_stream = pre.extract_nidq_channels(sess_par, run_recs_dict, rig_dict, sync_list, chan_type='ttl')
sync_file_path = os.path.join(sgl_exp_struct['folders']['derived'], 'wav_sync.wav')
wav_s_f = wu.save_wav(sync_stream, nidq_s_f, sync_file_path)
logger.info('Getting sync events from the wav sync channel')
sync_ev_path = os.path.join(sgl_exp_struct['folders']['derived'], 'wav_sync_evt.npy')
wav_s_f, x_d, ttl_arr = wu.wav_to_syn(sync_file_path)
logger.info('saving sync events of the wav channel to {}'.format(sync_ev_path))
np.save(sync_ev_path, ttl_arr)
t_0_path = os.path.join(sgl_exp_struct['folders']['derived'], 'wav_t0.npy')
logger.info('saving t0 for wav channel to {}'.format(t_0_path))
np.save(t_0_path, np.arange(sync_stream.size)/wav_s_f)
#make the sync dict
syn_dict = {'s_f': wav_s_f,
't_0_path': t_0_path,
'evt_arr_path': sync_ev_path}
syn_dict_path = os.path.join(sgl_exp_struct['folders']['derived'], '{}_sync_dict.pkl'.format('wav'))
syn_dict['path'] = syn_dict_path
logger.info('saving sync dict to ' + syn_dict_path)
with open(syn_dict_path, 'wb') as pf:
pickle.dump(syn_dict, pf)
epoch_dict = {'epoch': epoch,
'files_pd': files_pd,
'recordings': run_recs_dict,
'meta': run_meta_files,
'rig': rig_dict}
return epoch_dict
one_epoch_dict = preprocess_run(sess_par, exp_struct, sess_epochs[1])
```
#### sequentially process all runs of the sessions
```
def preprocess_session(sess_par: dict):
logger.info('pre-process all runs of sess ' + sess_par['sess'])
# get exp struct
sess_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], sort=sess_par['sort'])
# list the epochs
sess_epochs = et.list_sgl_epochs(sess_par)
logger.info('found epochs: {}'.format(sess_epochs))
# preprocess all epochs
epoch_dict_list = []
for i_ep, epoch in enumerate(sess_epochs):
try:
exp_struct = et.sgl_struct(sess_par, epoch)
one_epoch_dict = preprocess_run(sess_par, exp_struct, epoch)
epoch_dict_list.append(one_epoch_dict)
except Exception as exc:
warnings.warn('Error in epoch {}'.format(epoch), UserWarning)
logger.info(traceback.format_exc())
logger.info(exc)
logger.info('Session {} epoch {} could not be preprocessed'.format(sess_par['sess'], epoch))
return epoch_dict_list
all_epoch_list = preprocess_session(sess_par)
```
## Process multiple sessions
```
sess_list = all_bird_sess
# fist implant, right hemisphere
sess_list = ['2020-11-04', '2020-11-05', '2020-11-06']
all_sess_dict = {}
for one_sess in sess_list[:]:
sess_par['sess'] = one_sess
preprocess_session(sess_par)
sess_par
```
# Search bouts
## search bouts for those sessions
```
from ceciestunepipe.util.sound import boutsearch as bs
from ceciestunepipe.util import wavutil as wu
from joblib import Parallel, delayed
import pickle
import sys
reload(bs)
def sess_file_id(f_path):
n = int(os.path.split(f_path)[1].split('-')[-1].split('.wav')[0])
return n
def get_all_day_bouts(sess_par: dict, hparams:dict, n_jobs: int=12, ephys_software='sglx',
parallel=True) -> pd.DataFrame:
logger.info('Will search for bouts through all session {}, {}'.format(sess_par['bird'], sess_par['sess']))
exp_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], ephys_software=ephys_software)
# get all the paths to the wav files of the epochs of the day
source_folder = exp_struct['folders']['derived']
logger.info('looking for mic wav files in {}'.format(source_folder))
wav_path_list = et.get_sgl_files_epochs(source_folder, file_filter='*wav_mic.wav')
wav_path_list.sort()
logger.info('Found {} files'.format(len(wav_path_list)))
print(wav_path_list)
get_file_bouts = lambda path: bs.get_epoch_bouts(path, hparams)
# Go parallel through all the paths in the day, get a list of all the pandas dataframes for each file
if parallel:
sess_pd_list = Parallel(n_jobs=n_jobs, verbose=100, prefer='threads')(delayed(get_file_bouts)(i) for i in wav_path_list)
else:
sess_pd_list = [get_file_bouts(i) for i in wav_path_list]
#concatenate the file and return it, eventually write to a pickle
sess_bout_pd = pd.concat(sess_pd_list)
return sess_bout_pd
def save_auto_bouts(sess_bout_pd, sess_par, hparams):
exp_struct = et.get_exp_struct(sess_par['bird'], sess_par['sess'], ephys_software='bouts_sglx')
#sess_bouts_dir = os.path.join(exp_struct['folders']['derived'], 'bouts_ceciestunepipe')
sess_bouts_dir = exp_struct['folders']['derived']
sess_bouts_path = os.path.join(sess_bouts_dir, hparams['bout_auto_file'])
hparams_pickle_path = os.path.join(sess_bouts_dir, 'bout_search_params.pickle')
os.makedirs(sess_bouts_dir, exist_ok=True)
logger.info('saving bouts pandas to ' + sess_bouts_path)
sess_bout_pd.to_pickle(sess_bouts_path)
logger.info('saving bout detect parameters dict to ' + hparams_pickle_path)
with open(hparams_pickle_path, 'wb') as fh:
pickle.dump(hparams, fh)
## need to enter 'sample_rate' from the file!
hparams = {
# spectrogram
'num_freq':1024, #1024# how many channels to use in a spectrogram #
'preemphasis':0.97,
'frame_shift_ms':5, # step size for fft
'frame_length_ms':10, #128 # frame length for fft FRAME SAMPLES < NUM_FREQ!!!
'min_level_db':-55, # minimum threshold db for computing spec
'ref_level_db':110, # reference db for computing spec
#'sample_rate':None, # sample rate of your data
# spectrograms
'mel_filter': False, # should a mel filter be used?
'num_mels':1024, # how many channels to use in the mel-spectrogram
'fmin': 500, # low frequency cutoff for mel filter
'fmax': 12000, # high frequency cutoff for mel filter
# spectrogram inversion
'max_iters':200,
'griffin_lim_iters':20,
'power':1.5,
# Added for the searching
'read_wav_fun': wu.read_wav_chan, # function for loading the wav_like_stream (has to return fs, ndarray)
'file_order_fun': sess_file_id, # function for extracting the file id within the session
'min_segment': 10, # Minimum length of supra_threshold to consider a 'syllable' (ms)
'min_silence': 2000, # Minimum distance between groups of syllables to consider separate bouts (ms)
'min_bout': 200, # min bout duration (ms)
'peak_thresh_rms': 0.55, # threshold (rms) for peak acceptance,
'thresh_rms': 0.25, # threshold for detection of syllables
'mean_syl_rms_thresh': 0.3, #threshold for acceptance of mean rms across the syllable (relative to rms of the file)
'max_bout': 120000, #exclude bouts too long
'l_p_r_thresh': 100, # threshold for n of len_ms/peaks (typically about 2-3 syllable spans)
'waveform_edges': 1000, #get number of ms before and after the edges of the bout for the waveform sample
'bout_auto_file': 'bout_auto.pickle', # extension for saving the auto found files
'bout_curated_file': 'bout_checked.pickle', #extension for manually curated files (coming soon)
}
all_sessions = sess_list[:]
#all_sessions = ['2021-07-18']
for sess in all_sessions:
sess_par['sess'] = sess
sess_bout_pd = get_all_day_bouts(sess_par, hparams, parallel=False)
save_auto_bouts(sess_bout_pd, sess_par, hparams)
sess_bouts_folder = os.path.join(exp_struct['folders']['derived'], 'bouts')
#bouts_to_wavs(sess_bout_pd, sess_par, hparams, sess_bouts_folder)
get_all_day_bouts??
sess_bout_pd.info()
np.unique(sess_bout_pd['start_ms']).size
```
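The `Parallel(n_jobs=...)(delayed(get_file_bouts)(i) ...)` call above is joblib's fan-out idiom over the day's wav files. The same pattern can be sketched with only the standard library's thread pool (a simplified stand-in: the dummy `get_file_bouts` below replaces `bs.get_epoch_bouts`, since the real function needs wav files on disk):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for bs.get_epoch_bouts: pretend each wav path yields one record
def get_file_bouts(path):
    return {'file': path, 'n_bouts': len(path)}

wav_path_list = ['epoch_0_wav_mic.wav', 'epoch_1_wav_mic.wav']

# Threads (like joblib's prefer='threads') suit I/O-bound wav reading;
# pool.map preserves the input order, like the list comprehension fallback
with ThreadPoolExecutor(max_workers=4) as pool:
    sess_pd_list = list(pool.map(get_file_bouts, wav_path_list))

print(sess_pd_list[0]['file'])  # epoch_0_wav_mic.wav
```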
# debug
## debug search_bout
```
## look for a single file
sess = sess_list[0]
exp_struct = et.get_exp_struct(sess_par['bird'], sess, ephys_software='sglx')
source_folder = exp_struct['folders']['derived']
wav_path_list = et.get_sgl_files_epochs(source_folder, file_filter='*wav_mic.wav')
wav_path_list.sort()
logger.info('Found {} files'.format(len(wav_path_list)))
print(wav_path_list)
one_file = wav_path_list[0]
reload(bs)
epoch_bout_pd, epoch_wav = bs.get_bouts_in_long_file(wav_path_list[0], hparams)
```
|
github_jupyter
|
# Telescopes: Tutorial 5
This notebook will build on the previous tutorials, showing more features of the `PsrSigSim`. Details will be given for new features, while other features have been discussed in the previous tutorial notebook. This notebook shows the details of different telescopes currently included in the `PsrSigSim`, how to call them, and how to define a user `telescope` for a simulated observation.
We again simulate precision pulsar timing data with high signal-to-noise pulse profiles in order to clearly show the input pulse profile in the final simulated data product. We note that the use of different telescopes will result in different signal strengths, as would be expected.
This example follows the previous notebooks in defining all necessary classes except for `telescope`.
```
# import some useful packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# import the pulsar signal simulator
import psrsigsim as pss
```
## The Folded Signal
Here we will use the same `Signal` definitions that have been used in the previous tutorials. We will again simulate a 20-minute-long observation total, with subintegrations of 1 minute. The other simulation parameters will be 64 frequency channels each 12.5 MHz wide (for 800 MHz bandwidth).
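The bookkeeping behind those numbers is just division, and is worth a quick sanity check:

```python
bw = 800.0          # total bandwidth in MHz
Nf = 64             # number of frequency channels
obslen = 60.0 * 20  # 20-minute observation, in seconds
sublen = 60.0       # 1-minute subintegrations, in seconds

chan_bw = bw / Nf            # width of each frequency channel
n_subints = obslen / sublen  # number of dumped subintegrations

print(chan_bw)    # 12.5 MHz per channel
print(n_subints)  # 20.0 subintegrations
```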
We will simulate a real pulsar, J1713+0747, as we have a premade profile for this pulsar. The period, DM, and other relevant pulsar parameters come from the NANOGrav 11-yr data release.
```
# Define our signal variables.
f0 = 1500 # center observing frequency in MHz
bw = 800.0 # observation bandwidth in MHz
Nf = 64 # number of frequency channels
# We define the pulse period early here so we can similarly define the frequency
period = 0.00457 # pulsar period in seconds for J1713+0747
f_samp = (1.0/period)*2048*10**-6 # sample rate of data in MHz (here 2048 samples across the pulse period)
sublen = 60.0 # subintegration length in seconds, or rate to dump data at
# Now we define our signal
signal_1713_GBT = pss.signal.FilterBankSignal(fcent = f0, bandwidth = bw, Nsubband=Nf, sample_rate = f_samp,
sublen = sublen, fold = True) # fold is set to `True`
```
## The Pulsar and Profiles
Now we will load the pulse profile as in Tutorial 3 and initialize a single `Pulsar` object.
```
# First we load the data array
path = 'psrsigsim/data/J1713+0747_profile.npy'
J1713_dataprof = np.load(path)
# Now we define the data profile
J1713_prof = pss.pulsar.DataProfile(J1713_dataprof)
# Define the values needed for the pulsar
Smean = 0.009 # The mean flux of the pulsar J1713+0747 at 1400 MHz from the ATNF pulsar catalog, here 0.009 Jy
psr_name = "J1713+0747" # The name of our simulated pulsar
# Now we define the pulsar with the scaled J1713+0747 profiles
pulsar_J1713 = pss.pulsar.Pulsar(period, Smean, profiles=J1713_prof, name = psr_name)
# define the observation length
obslen = 60.0*20 # seconds, 20 minutes in total
```
## The ISM
Here we define the `ISM` class used to disperse the simulated pulses.
```
# Define the dispersion measure
dm = 15.921200 # pc cm^-3
# And define the ISM object, note that this class takes no initial arguments
ism_sim = pss.ism.ISM()
```
## Defining Telescopes
Here we will show how to use the two predefined telescopes, Green Bank and Arecibo, and the systems associated with them. We will also show how to define a `telescope` from scratch, so that any current or future telescopes and systems can be simulated.
### Predefined Telescopes
We start off by showing the two predefined telescopes.
```
# Define the Green Bank Telescope
tscope_GBT = pss.telescope.telescope.GBT()
# Define the Arecibo Telescope
tscope_AO = pss.telescope.telescope.Arecibo()
```
Each telescope is made up of one or more `systems` consisting of a `Receiver` and a `Backend`. For the predefined telescopes, the systems for the `GBT` are the L-band-GUPPI system or the 800 MHz-GUPPI system. For `Arecibo` these are the 430 MHz-PUPPI system or the L-band-PUPPI system. One can check to see what these systems and their parameters are as we show below.
```
# Information about the GBT systems
print(tscope_GBT.systems)
# We can also find out information about a receiver that has been defined here
rcvr_LGUP = tscope_GBT.systems['Lband_GUPPI'][0]
print(rcvr_LGUP.bandwidth, rcvr_LGUP.fcent, rcvr_LGUP.name)
```
### Defining a new system
One can also add a new system to one of these existing telescopes, similarly to what will be done when defining a new telescope from scratch. Here we will add the 350 MHz receiver with the GUPPI backend to the Green Bank Telescope.
First we define a new `Receiver` and `Backend` object. The `Receiver` object needs a center frequency of the receiver in MHz, a bandwidth in MHz to be centered on that center frequency, and a name. The `Backend` object needs only a name and a sampling rate in MHz. This sampling rate should be the maximum sampling rate of the backend, as it will allow lower sampling rates, but not higher sampling rates.
```
# First we define a new receiver
rcvr_350 = pss.telescope.receiver.Receiver(fcent=350, bandwidth=100, name="350")
# And then we want to use the GUPPI backend
guppi = pss.telescope.backend.Backend(samprate=3.125, name="GUPPI")
# Now we add the new system. This needs just the receiver, backend, and a name
tscope_GBT.add_system(name="350_GUPPI", receiver=rcvr_350, backend=guppi)
# And now we check that it has been added
print(tscope_GBT.systems["350_GUPPI"])
```
### Defining a new telescope
We can also define a new telescope from scratch. In addition to needing the `Receiver` and `Backend` objects to define at least one system, the `telescope` also needs the aperture size in meters, the total area in meters^2, the system temperature in kelvin, and a name. Here we will define a small 3-meter aperture circular radio telescope that you might find at a University or somebody's backyard.
```
# We first need to define the telescope parameters
aperture = 3.0 # meters
area = (0.5*aperture)**2*np.pi # meters^2
Tsys = 250.0 # kelvin, note this is not a realistic system temperature for a backyard telescope
name = "Backyard_Telescope"
# Now we can define the telescope
tscope_bkyd = pss.telescope.Telescope(aperture, area=area, Tsys=Tsys, name=name)
```
Now similarly to defining a new system before, we must add a system to our new telescope by defining a receiver and a backend. Since this just represents a little telescope, the system won't be comparable to the previously defined telescope.
```
rcvr_bkyd = pss.telescope.receiver.Receiver(fcent=1400, bandwidth=20, name="Lband")
backend_bkyd = pss.telescope.backend.Backend(samprate=0.25, name="Laptop") # Note this is not a realistic sampling rate
# Add the system to our telescope
tscope_bkyd.add_system(name="bkyd", receiver=rcvr_bkyd, backend=backend_bkyd)
# And now we check that it has been added
print(tscope_bkyd.systems)
```
## Observing with different telescopes
Now that we have three different telescopes, we can observe our simulated pulsar with all three and compare the sensitivity of each telescope for the same initial `Signal` and `Pulsar`. Since the radiometer noise from the telescope is added directly to the signal though, we will need to define two additional `Signals` and create pulses for them before we can observe them with different telescopes.
```
# We define three new, similar, signals, one for each telescope
signal_1713_AO = pss.signal.FilterBankSignal(fcent = f0, bandwidth = bw, Nsubband=Nf, sample_rate = f_samp,
sublen = sublen, fold = True)
# Our backyard telescope will need slightly different parameters to be comparable to the other signals
f0_bkyd = 1400.0 # center frequency of our backyard telescope
bw_bkyd = 20.0 # Bandwidth of our backyard telescope
Nf_bkyd = 1 # only process one frequency channel 20 MHz wide for our backyard telescope
signal_1713_bkyd = pss.signal.FilterBankSignal(fcent = f0_bkyd, bandwidth = bw_bkyd, Nsubband=Nf_bkyd, \
sample_rate = f_samp, sublen = sublen, fold = True)
# Now we make pulses for all three signals
pulsar_J1713.make_pulses(signal_1713_GBT, tobs = obslen)
pulsar_J1713.make_pulses(signal_1713_AO, tobs = obslen)
pulsar_J1713.make_pulses(signal_1713_bkyd, tobs = obslen)
# And disperse them
ism_sim.disperse(signal_1713_GBT, dm)
ism_sim.disperse(signal_1713_AO, dm)
ism_sim.disperse(signal_1713_bkyd, dm)
# And now we observe with each telescope, note the only change is the system name. First the GBT
tscope_GBT.observe(signal_1713_GBT, pulsar_J1713, system="Lband_GUPPI", noise=True)
# Then Arecibo
tscope_AO.observe(signal_1713_AO, pulsar_J1713, system="Lband_PUPPI", noise=True)
# And finally our little backyard telescope
tscope_bkyd.observe(signal_1713_bkyd, pulsar_J1713, system="bkyd", noise=True)
```
Now we can look at the simulated data and compare the sensitivity of the different telescopes. We first plot the observation from the GBT, then Arecibo, and then our newly defined backyard telescope.
```
# We first plot the first two pulses in frequency-time space to show the dispersed pulses
time = np.linspace(0, obslen, len(signal_1713_GBT.data[0,:]))
# Since we know there are 2048 bins per pulse period, we can index the appropriate amount
plt.plot(time[:4096], signal_1713_GBT.data[0,:4096], label = signal_1713_GBT.dat_freq[0])
plt.plot(time[:4096], signal_1713_GBT.data[-1,:4096], label = signal_1713_GBT.dat_freq[-1])
plt.ylabel("Intensity")
plt.xlabel("Time [s]")
plt.legend(loc = 'best')
plt.title("L-band GBT Simulation")
plt.show()
plt.close()
# And the 2-D plot
plt.imshow(signal_1713_GBT.data[:,:4096], aspect = 'auto', interpolation='nearest', origin = 'lower', \
extent = [min(time[:4096]), max(time[:4096]), signal_1713_GBT.dat_freq[0].value, signal_1713_GBT.dat_freq[-1].value])
plt.ylabel("Frequency [MHz]")
plt.xlabel("Time [s]")
plt.colorbar(label = "Intensity")
plt.show()
plt.close()
# Since we know there are 2048 bins per pulse period, we can index the appropriate amount
plt.plot(time[:4096], signal_1713_AO.data[0,:4096], label = signal_1713_AO.dat_freq[0])
plt.plot(time[:4096], signal_1713_AO.data[-1,:4096], label = signal_1713_AO.dat_freq[-1])
plt.ylabel("Intensity")
plt.xlabel("Time [s]")
plt.legend(loc = 'best')
plt.title("L-band AO Simulation")
plt.show()
plt.close()
# And the 2-D plot
plt.imshow(signal_1713_AO.data[:,:4096], aspect = 'auto', interpolation='nearest', origin = 'lower', \
extent = [min(time[:4096]), max(time[:4096]), signal_1713_AO.dat_freq[0].value, signal_1713_AO.dat_freq[-1].value])
plt.ylabel("Frequency [MHz]")
plt.xlabel("Time [s]")
plt.colorbar(label = "Intensity")
plt.show()
plt.close()
# Since we know there are 2048 bins per pulse period, we can index the appropriate amount
plt.plot(time[:4096], signal_1713_bkyd.data[0,:4096], label = "1400.0 MHz")
plt.ylabel("Intensity")
plt.xlabel("Time [s]")
plt.legend(loc = 'best')
plt.title("L-band Backyard Telescope Simulation")
plt.show()
plt.close()
```
We can see that, as expected, the Arecibo telescope is more sensitive than the GBT when observing over the same timescale. We can also see that even though the simulated pulsar here is easily visible with these large telescopes, our backyard telescope is not able to see the pulsar over the same amount of time, since the output is pure noise. The `PsrSigSim` can be used to determine the approximate sensitivity of an observation of a simulated pulsar with any given telescope that can be defined.
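That qualitative comparison also follows from the radiometer equation, which sets the ideal noise floor of each dish. The sketch below is a back-of-the-envelope check, not part of PsrSigSim: it assumes unit aperture efficiency, two summed polarizations, and an illustrative 25 K system temperature for the GBT (the 250 K backyard value comes from this notebook):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def radiometer_noise_jy(t_sys_k, area_m2, bw_hz, t_int_s, n_pol=2):
    """Ideal per-integration noise (Jy) from the radiometer equation."""
    gain_k_per_jy = area_m2 * 1e-26 / (2.0 * k_B)  # K/Jy, efficiency = 1
    return t_sys_k / (gain_k_per_jy * np.sqrt(n_pol * bw_hz * t_int_s))

t_int = 60.0 * 20  # the full 20-minute integration, seconds

# 100 m GBT dish with 800 MHz bandwidth vs 3 m backyard dish with 20 MHz
sigma_gbt = radiometer_noise_jy(25.0, np.pi * 50.0**2, 800e6, t_int)
sigma_bkyd = radiometer_noise_jy(250.0, np.pi * 1.5**2, 20e6, t_int)

print(sigma_gbt < 0.009 < sigma_bkyd)  # True: only the big dish beats 9 mJy
```

The backyard noise floor comes out orders of magnitude above the pulsar's 9 mJy mean flux, which is why its output is pure noise.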
### Note about randomly generated pulses and noise
`PsrSigSim` uses `numpy.random` under the hood to generate the radio pulses and various types of noise. If a user desires or requires that this randomly generated data be reproducible, we recommend calling the seed generator native to `NumPy` before calling the function that produces the random noise/pulses. Newer versions of `NumPy` are moving toward slightly different [functionality/syntax](https://numpy.org/doc/stable/reference/random/index.html), but are essentially used in the same way.
```
numpy.random.seed(1776)
pulsar_1.make_pulses(signal_1, tobs=obslen)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/sitori8354/eatago/blob/main/Eatago.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip3 install -U pywebio
# !pip install -U https://github.com/solrz/PyWebIO/archive/dev.zip
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
!wget https://raw.githubusercontent.com/solrz/pywebio-example/main/hello_world.py
!pip3 install nest-asyncio
import random
import os
import time
assign_port = 80
import nest_asyncio
from pywebio.input import *
from pywebio.output import *
import pywebio
from multiprocessing import Process
import sys, json
import asyncio
try:
del new_server
del start_app
except:
pass
nest_asyncio.apply()
def new_server(target_app, port):
try:
if pywebio:
from importlib import reload
reload(pywebio)
except:
pass
return lambda: pywebio.start_server(target_app, port=port)
previous_process = None
ports = [80]
def start_app(target_app):
assign_port = random.randint(5000,9999)
global ports
while assign_port in ports:
assign_port += 1
ports += [assign_port]
tunnel_port = f'1{assign_port}'
ngrok_file = f'./ngrok_config_{tunnel_port}'
with open(ngrok_file, 'w') as f:
f.write('web_addr: '+tunnel_port)
get_ipython().system_raw(f'./ngrok http {assign_port} --config "{ngrok_file}" &')
time.sleep(2)
forward_info_raw = !curl -s http://localhost:$tunnel_port/api/tunnels
# print(forward_info_raw)
forward_info = json.loads(forward_info_raw[0])
    print(f'Drag this URL into a new window to open the app (the URL changes every run!): \n{forward_info["tunnels"][0]["public_url"]}')
# | python3 -c \
# 'import sys, json; print("請拖曳網址到新視窗來打開App(每次網址都會更新喔!): " +json.load(sys.stdin)["tunnels"][0]["public_url"])'
global previous_process
if previous_process:
previous_process.terminate()
previous_process = Process(target=new_server(target_app, assign_port))
previous_process.daemon = True
previous_process.start()
def example_hello_world_app():
put_text('hello_world!')
    name = input('Squid and cuttlefish noodles?')
    put_text(f'Good morning {name}, welcome to my app')
def inertactive_app():
enable_print_convert_to_put_text = True
    put_markdown('# Python live-coding Web App')
    put_markdown('Here you can type code directly and quickly test it')
    code = "put_markdown('# I can read your mind...')\nname = input('First, what is your name?')\nput_text(f'Your name is {name}, I guessed it!')\n"
round = 1
while True:
        put_text(f'Run #{round}')
        code = textarea('Enter code', code={
'mode': "python", # code language
'theme': 'darcula', # Codemirror theme. Visit https://codemirror.net/demo/theme.html#cobalt to get more themes
}, value=code)
put_markdown('---')
put_code(code)
try:
exec(code.replace('print(','put_text('))
except Exception as e:
put_text(e)
put_markdown('---')
round += 1
# The app passed to start_app (e.g. start_app(example_hello_world_app)) is the one that runs
def task_1():
put_text('task_1')
put_buttons(['Go task 2'], [lambda: go_app('task_2')])
hold()
def task_2():
put_text('task_2')
put_buttons(['Go task 1'], [lambda: go_app('task_1')])
gift = select('what?', ['ta','ya'])
hold()
def index():
    put_link('Go task 1', app='task_1')  # the app parameter specifies the task name
put_link('Go task 2', app='task_2')
# equivalent to start_server({'index': index, 'task_1': task_1, 'task_2': task_2})
start_app([index, task_1, task_2])
```
|
github_jupyter
|
```
import glob
import os
import sys
import struct
import pandas as pd
from nltk.tokenize import sent_tokenize
from tensorflow.core.example import example_pb2
sys.path.append('../src')
import data_io, params, SIF_embedding
def return_bytes(reader_obj):
len_bytes = reader_obj.read(8)
str_len = struct.unpack('q', len_bytes)[0]
e_s = struct.unpack("%ds" % str_len, reader_obj.read(str_len))
es = e_s[0]
c = example_pb2.Example.FromString(es)
article = str(c.features.feature['article'].bytes_list.value[0])
abstract = str(c.features.feature['abstract'].bytes_list.value[0])
ab = sent_tokenize(abstract)
clean_article = sent_tokenize(article)
clean_abstract = '. '.join([' '.join(s for s in x.split() if s.isalnum()) for x in ''.join(ab).replace("<s>","").split("</s>")]).strip()
return clean_abstract, clean_article, abstract
def load_embed(wordfile, weightfile, weightpara=1e-3, param=None, rmpc=0):
'''
wordfile: : location of embedding data (e.g., glove embedings)
weightfile: : location of TF data for words
weightpara: : the parameter in the SIF weighting scheme, usually in range [3e-5, 3e-3]
rmpc: : number of principal components to remove in SIF weighting scheme
'''
    # input files: wordfile is a word vector file (e.g. GloVe, downloadable from
    # the GloVe website); weightfile has one word and its frequency per line.
    # Both are taken from the function arguments rather than hardcoded here.
# load word vectors
(words, Weights) = data_io.getWordmap(wordfile)
# load word weights
word2weight = data_io.getWordWeight(weightfile, weightpara) # word2weight['str'] is the weight for the word 'str'
weight4ind = data_io.getWeight(words, word2weight) # weight4ind[i] is the weight for the i-th word
# set parameters
param.rmpc = rmpc
return Weights, words, word2weight, weight4ind
def return_sif(sentences, words, weight4ind, param, Weights):
# x is the array of word indices, m is the binary mask indicating whether there is a word in that location
x, m = data_io.sentences2idx(sentences, words)
w = data_io.seq2weight(x, m, weight4ind) # get word weights
# get SIF embedding
embeddings = SIF_embedding.SIF_embedding(Weights, x, w, param) # embedding[i,:] is the embedding for sentence i
return embeddings
def embed_sentences(wordfile, weightfile, weightpara, param, rmpc, file_list):
Weights, words, word2weight, weight4ind = load_embed(wordfile, weightfile, weightpara, param, rmpc)
print('embeddings loaded...')
for file_i in file_list:
input_file = open(file_i, 'rb')
while input_file:
            clean_abstract, clean_article, abstract = return_bytes(input_file)
print('article cleaned...')
embeddings = return_sif(clean_article, words, weight4ind, param, Weights)
sdf = pd.DataFrame(clean_article, columns=['sentence'])
sdf['clean_sentence'] = [' '.join([s for s in x if s.isalnum()]) for x in sdf['sentence'].str.split(" ")]
sdf['summary'] = clean_abstract
            sdf.loc[1:, 'summary'] = ''  # .ix is deprecated; use .loc
embcols = ['emb_%i'%i for i in range(embeddings.shape[1])]
emb = pd.DataFrame(embeddings, columns = embcols)
sdf = pd.concat([sdf, emb], axis=1)
            sdf = sdf[sdf.columns[[2, 0, 1]].tolist() + sdf.columns[3:].tolist()]
print(sdf.head())
break
break
myparams = params.params()
mainpath = 'home/francisco/GitHub/SIF/'
wordf = os.path.join(mainpath, 'data/glove.840B.300d.txt')
weightf = os.path.join(mainpath, 'auxiliary_data/enwiki_vocab_min200.txt')
wp = 1e-3
rp = 0
fl = ['/home/francisco/GitHub/cnn-dailymail/finished_files/chunked/train_000.bin']
wordfile, weightfile, weightpara, param, rmpc, file_list = wordf, weightf, wp, myparams, rp, fl
Weights, words, word2weight, weight4ind = load_embed(wordfile, weightfile, weightpara, param, rmpc)
clean_abstract
print('embeddings loaded...')
for file_i in file_list:
input_file = open(file_i, 'rb')
while input_file:
clean_abstract, clean_article, abstractx = return_bytes(input_file)
print('article cleaned...')
embeddings = return_sif(clean_article, words, weight4ind, param, Weights)
sdf = pd.DataFrame(clean_article, columns=['sentence'])
sdf['clean_sentence'] = [' '.join([s for s in x if s.isalnum()]) for x in sdf['sentence'].str.split(" ")]
sdf['summary'] = clean_abstract
        sdf.loc[1:, 'summary'] = ''  # .ix is deprecated; use .loc
embcols = ['emb_%i'%i for i in range(embeddings.shape[1])]
emb = pd.DataFrame(embeddings, columns = embcols)
sdf = pd.concat([sdf, emb], axis=1)
sdf = sdf[['summary', 'sentence', 'clean_sentence'] + sdf.columns[3:].tolist()].head()
print(sdf.head())
break
break
clean_abstract
abstractx
sdf['sentence'][0].split(" ")[0]
dfile = "/home/francisco/GitHub/DQN-Event-Summarization/SIF/data/metadata/cnn_dm_metadata.csv"
md = pd.read_csv(dfile)
md.head()
md.shape
md.describe()
import matplotlib.pyplot as plt
from sklearn.neighbors import KernelDensity
import numpy as np
def cdfplot(xvar):
    sortedvals = np.sort(xvar)
    yvals = np.arange(len(sortedvals)) / float(len(sortedvals))
    plt.plot(sortedvals, yvals)
    plt.grid()
    plt.show()
%matplotlib inline
cdfplot(md['nsentences'])
cdfplot(md['sentences_nchar'])
cdfplot(md['summary_ntokens'])
```
|
github_jupyter
|
## FCLA/FNLA Fast.ai Numerical/Computational Linear Algebra
### Lecture 3: New Perspectives on NMF, Randomized SVD
Notes / In-Class Questions
WNixalo - 2018/2/8
Question on section: [Truncated SVD](http://nbviewer.jupyter.org/github/fastai/numerical-linear-algebra/blob/master/nbs/2.%20Topic%20Modeling%20with%20NMF%20and%20SVD.ipynb#More-Details)
Given A: `m` x `n` and Q: `m` x `r`; is Q the identity matrix?
$$A \approx QQ^TA$$
```
import torch
import numpy as np
Q = np.eye(3)
print(Q)
print(Q.T)
print(Q @ Q.T)
# construct I matrix
Q = torch.eye(3)
# torch matrix multip
# torch.mm(Q, Q.transpose)
Q @ torch.t(Q)
```
So if A is *approximately equal* to $QQ^TA$, but *not* equal, then Q is **not** the identity, but is very close to it.
Oh, right. Q: m x r, **not** m x m...
If both the columns and rows of Q had been orthonormal, then it would have been the Identity, but only the columns (r) are orthonormal.
Q is a tall, skinny matrix.
---
AW gives range(A). AW has far more rows than columns, so in practice these columns are approximately orthonormal (it's very unlikely to get linearly dependent columns when choosing random values).
QR decomposition is foundational to Numerical Linear Algebra.
Q consists of orthonormal columns, R is upper-triangular.
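Both claims are easy to verify numerically with numpy's reduced QR on a random tall matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))  # tall, skinny

Q, R = np.linalg.qr(A)  # reduced QR: Q is 100x5, R is 5x5

print(np.allclose(Q.T @ Q, np.eye(5)))  # True: columns are orthonormal
print(np.allclose(R, np.triu(R)))       # True: R is upper-triangular
print(np.allclose(Q @ R, A))            # True: exact reconstruction
```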
**Calculating Truncated-SVD:**
1\. Compute approximation to range(A). We want Q with r orthonormal columns such that $$A\approx QQ^TA$$
2\. Construct $B = Q^T A$, which is small ($r\times n$)
3\. Compute the SVD of $B$ by standard methods (fast since $B$ is smaller than $A$): $B = S\, Σ V^T$
4\. Since: $$A \approx QQ^TA = Q(S \, ΣV^T)$$ if we set $U = QS$, then we have a low rank approximation $A \approx UΣV^T$.
**How to choose $r$?**
If we wanted to get 5 cols from a matrix of 100 cols, (5 topics). As a rule of thumb, let's go for 15 instead. You don't want to explicitly pull exactly the amount you want due to the randomized component being present, so you add some buffer.
Since our projection is approximate, we make it a little bigger than we need.
**Implementing Randomized SVD:**
First we want a randomized range finder.
```
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn import decomposition
from scipy import linalg
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(suppress=True)
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
remove = ('headers', 'footers', 'quotes')
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories, remove=remove)
# newsgroups_test = fetch_20newsgroups(subset='test', categories=categories, remove=remove)
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
vectorizer = CountVectorizer(stop_words='english')
vectors = vectorizer.fit_transform(newsgroups_train.data).todense() # (documents, vocab)
vocab = np.array(vectorizer.get_feature_names())
num_top_words=8
def show_topics(a):
top_words = lambda t: [vocab[i] for i in np.argsort(t)[:-num_top_words-1:-1]]
topic_words = ([top_words(t) for t in a])
return [' '.join(t) for t in topic_words]
# computes an orthonormal matrix whose range approximates the range of A
# power_iteration_normalizer can be safe_sparse_dot (fast but unstable), LU (imbetween), or QR (slow but most accurate)
def randomized_range_finder(A, size, n_iter=5):
# randomly init our Mat to our size; size: num_cols
Q = np.random.normal(size=(A.shape[1], size))
# LU decomp (lower triang * upper triang mat)
# improves accuracy & normalizes
for i in range(n_iter):
Q, _ = linalg.lu(A @ Q, permute_l=True)
Q, _ = linalg.lu(A.T @ Q, permute_l=True)
# QR decomp on A & Q
Q, _ = linalg.qr(A @ Q, mode='economic')
return Q
```
Randomized SVD method:
```
def randomized_svd(M, n_components, n_oversamples=10, n_iter=4):
# number of random columns we're going to create is the number of
# columns we want + number of oversamples (extra buffer)
n_random = n_components + n_oversamples
Q = randomized_range_finder(M, n_random, n_iter)
# project M to the (k + p) dimensional space using basis vectors
B = Q.T @ M
# compute SVD on the thin matrix: (k + p) wide
Uhat, s, V = linalg.svd(B, full_matrices=False)
del B
U = Q @ Uhat
# return the number of components we want from U, s, V
return U[:, :n_components], s[:n_components], V[:n_components, :]
%time u, s, v = randomized_svd(vectors, 5)
u.shape, s.shape, v.shape
show_topics(v)
```
Computational complexity of the full SVD for an $M \times N$ matrix is $M^2N + N^3$, so randomized (truncated) SVD is a *massive* improvement.
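A self-contained numpy sketch of why this pays off: build an exactly rank-10 matrix, then run a compact randomized SVD in the same spirit as the functions above (no power iterations, for brevity). The expensive dense SVD happens on a 20-row matrix instead of a 2000-row one, yet the reconstruction is essentially exact:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 2000, 500, 10
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # exactly rank-10

def rand_svd(A, k, p=10):
    # range finder: project onto k+p random directions, orthonormalize
    Q, _ = np.linalg.qr(A @ rng.normal(size=(A.shape[1], k + p)))
    # SVD of the small (k+p) x n matrix, then lift back with Q
    U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_hat)[:, :k], s[:k], Vt[:k]

U, s, Vt = rand_svd(A, r)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(err < 1e-8)  # True: near machine-precision recovery
```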
---
2018/3/7
Write a loop to calculate the error of your decomposition as your vary the # of topics. Plot the results.
```
# 1. how do I calculate decomposition error?:
# I guess I'll use MSE?
# # NumPy: # https://stackoverflow.com/questions/16774849/mean-squared-error-in-numpy
# def MSEnp(A,B):
# if type(A) == np.ndarray and type(B) == np.ndarray:
# return ((A - B) ** 2).mean()
# else:
# return np.square((A - B)).mean()
# Scikit-Learn:
from sklearn import metrics
MSE = metrics.mean_squared_error # usg: mse(A,B)
# 2. Now how to recompose my decomposition?:
%time B = vectors # original matrix
%time U, S, V = randomized_svd(B, 10) # num_topics = 10
# S is vector of Σ's singular values. Convert back to matrix:
%time Σ = S * np.eye(S.shape[0])
# from SVD formula: A ≈ U@Σ@V.T
%time A = U@Σ@V ## apparently randomized_svd returns V.T, not V ?
# 3. Finally calculated error I guess:
%time mse_error = MSE(A,B)
print(mse_error)
# I'm putting way too much effort into this lol
def fib(n):
    # iterative Fibonacci: fib(0)=0, fib(1)=1, fib(2)=1, ...
    if n <= 1:
        return n
    a, b = 0, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b
## Setup
import time
B = vectors
num_topics = [fib(i) for i in range(2,14)]
TnE = [] # time & error
## Loop:
for n_topics in num_topics:
t0 = time.time()
U, S, Vt = randomized_svd(B, n_topics)
Σ = S * np.eye(S.shape[0])
A = U@Σ@Vt
TnE.append([time.time() - t0, MSE(A,B)])
for i, tne in enumerate(TnE):
print(f'Topics: {num_topics[i]:>3} '
f'Time: {np.round(tne[0],3):>3} '
f'Error: {np.round(tne[1],12):>3}')
# https://matplotlib.org/users/pyplot_tutorial.html
plt.plot(num_topics, [tne[1] for tne in TnE])
plt.xlabel('No. Topics')
plt.ylabel('MSE Error')
plt.show()
## R.Thomas' class solution:
step = 20
n = 20
error = np.zeros(n)
for i in range(n):
U, s, V = randomized_svd(vectors, i * step)
reconstructed = U @ np.diag(s) @ V
error[i] = np.linalg.norm(vectors - reconstructed)
plt.plot(range(0,n*step,step), error)
```
Looks like she used the Norm instead of MSE. Same curve shape.
Here's why I used the Fibonacci sequence for my topic numbers. This solution took much longer than mine (i=20 vs i=12) with more steps, yet mine appears smoother. Why? I figured this was the shape of curve I'd get: the interesting bit is at the beginning, so I used a number sequence that spreads out as it goes, giving higher resolution early on. Yay.
---
**NOTE**: random magical superpower Machine Learning Data Analytics *thing*: ***Johnson-Lindenstrauss lemma***:
basically if you have a matrix with too many columns to work with (leading to overfitting or whatever else), multiply it by a random matrix with fewer columns and you'll approximately preserve its properties (e.g. pairwise distances) in a workable shape
https://en.wikipedia.org/wiki/Johnson-Lindenstrauss_lemma
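A toy numpy demonstration: the projection matrix is random Gaussian and not square (it is d x k with k < d, which is what shrinks the data), and the 1/sqrt(k) factor is the standard scaling for a Gaussian projection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 10000, 1000  # 50 points in 10k dims, projected down to 1k

X = rng.normal(size=(n, d))
P = rng.normal(size=(d, k)) / np.sqrt(k)  # scaled Gaussian projection
Y = X @ P

# Compare one pairwise distance before and after projection
before = np.linalg.norm(X[0] - X[1])
after = np.linalg.norm(Y[0] - Y[1])
print(abs(after / before - 1))  # small: distance preserved to a few percent
```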
|
github_jupyter
|
# USDA Unemployment
<hr>
```
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
```
# Data
## US Unemployment data by county
Economic Research Service
U.S. Department of Agriculture
link:
### Notes
- The Year 2020, Median Household Income (2019), and % of State Median HH Income columns had 78 NaN values, all from Puerto Rico.
- I am going to drop all rows from Puerto Rico, since Puerto Rico does not show up in any of the other USDA data. If we want it back in, it will be easy to re-add the Puerto Rico data.
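On a toy frame, that planned Puerto Rico drop looks like this (the column names and values here are illustrative, mirroring the master DataFrame built later in this notebook):

```python
import pandas as pd

df = pd.DataFrame({
    'FIPS': [72001, 1001, 50001],
    'state': ['Puerto Rico', 'Alabama', 'Vermont'],
    '2020': [None, 5.9, 4.4],  # the Puerto Rico rows carry the NaNs
})

# Keep everything except Puerto Rico
df = df[df['state'] != 'Puerto Rico'].reset_index(drop=True)

print(df.shape)                 # (2, 3)
print(df['2020'].isna().sum())  # 0: the NaNs left with Puerto Rico
```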
## Constants
<hr>
```
stats_master_list = ['Vermont',
'Mississippi',
'Maine',
'Montana',
'Washington',
'District of Columbia',
'Texas',
'Alabama',
'Michigan',
'Maryland',
'Rhode Island',
'South Dakota',
'Nebraska',
'Virginia',
'Florida',
'Utah',
'Louisiana',
'Missouri',
'Massachusetts',
'South Carolina',
'Pennsylvania',
'Tennessee',
'Minnesota',
'Idaho',
'Alaska',
'Oklahoma',
'North Dakota',
'Arkansas',
'Georgia',
'New Hampshire',
'Indiana',
'Puerto Rico',
'New Jersey',
'Delaware',
'West Virginia',
'Colorado',
'New York',
'Kansas',
'Arizona',
'Ohio',
'Hawaii',
'Illinois',
'Oregon',
'North Carolina',
'California',
'Kentucky',
'Wyoming',
'Iowa',
'Nevada',
'Connecticut',
'Wisconsin',
'New Mexico']
# column Names
columns = [ 'FIPS ', 'Name',
'2012', 2013,
2014, 2015,
2016, 2017,
2018, 2019,
'2020', 'Median Household Income (2019)',
'% of State Median HH Income']
"""
Duplicate check 3
from
https://thispointer.com/python-3-ways-to-check-if-there-are-duplicates-in-a-list/
"""
def checkIfDuplicates_3(listOfElems):
''' Check if given list contains any duplicates '''
for elem in listOfElems:
if listOfElems.count(elem) > 1:
return True
return False
```
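The count-based check above is O(n²) because `list.count` rescans the whole list for every element; a set does the same job in O(n). A small alternative sketch:

```python
def check_if_duplicates_fast(list_of_elems):
    ''' O(n) duplicate check: a set keeps only unique elements '''
    return len(list_of_elems) != len(set(list_of_elems))
```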
## File management
<hr>
```
files = os.listdir("../data_raw/USDA_gov-unemplyment/")
# remove mac file
files.remove('.DS_Store')
#files
```
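Removing `.DS_Store` by name works until another hidden file shows up; filtering on the leading dot is a more general sketch (the helper name here is mine):

```python
import os

def list_data_files(path):
    """List files in path, skipping hidden files such as macOS .DS_Store."""
    return sorted(f for f in os.listdir(path) if not f.startswith('.'))
```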
# Example of the csv files
<hr>
```
# random peek
df = pd.read_excel('../data_raw/USDA_gov-unemplyment/UnemploymentReport (14).xlsx', skiprows=2)
df.shape
df.head()
df.tail()
```
# Create master DataFrame
<hr>
```
# Concat
# create master file
master_df = pd.DataFrame(columns = columns)
state_name_list = []
# LOOP
for file in files:
# read excel file
_df = pd.read_excel('../data_raw/USDA_gov-unemplyment/'+file, skiprows=2)
# read state_name
state_name = _df.iloc[0,1]
# DROP
#drop row 0
_df.drop(0, inplace = True)
    # Drop the last row
_df.drop(_df.tail(1).index, inplace = True)
    # workaround to drop the all-NaN column: keep the first 12 columns,
    _temp_df = _df.iloc[:,0:12]
    # then copy the last real column back in
    _temp_df['% of State Median HH Income'] = _df['% of State Median HH Income']
    # add a column for the state name
_temp_df['state'] = state_name
state_name_list.append(state_name)
# Concat
master_df = pd.concat([master_df, _temp_df])
```
<br>
## Dataframe clean up
<hr>
```
# reset Index
master_df.reset_index(drop = True, inplace = True )
master_df.columns
# Rename columns
master_df.rename(columns = {'FIPS ':'FIPS'}, inplace = True)
# shape
master_df.shape
master_df.head()
```
## Remove rows with all nan's
<hr>
```
master_df.isna().sum()
master_df[ master_df['FIPS'].isnull()].head()
nan_rows = master_df[ master_df['FIPS'].isnull()].index
nan_rows
len(nan_rows)
# remove rows with all Nans
master_df.drop(nan_rows, inplace = True)
master_df.isna().sum()
master_df[ master_df['2020'].isnull()].iloc[20:25,:]
```
- There are 78 rows that have NaNs for 2020.
- All of the remaining rows with NaNs are from Puerto Rico.
- I am going to remove the Puerto Rico rows because the other USDA data sets do not include Puerto Rico.
```
master_df[ master_df['state'] == 'Puerto Rico' ].index
# Drop all Rows with state as Puerto Rico
index_names = master_df[ master_df['state'] == 'Puerto Rico' ].index
master_df.drop(index_names, inplace = True)
master_df.isna().sum()
master_df.shape
```
<br>
# Sanity Check
<hr>
```
# unique count of states
master_df['state'].nunique()
len(state_name_list)
# checks if there are duplicates in state list
checkIfDuplicates_3(state_name_list)
master_df['state'].nunique()
```
# Write to CSV
<hr>
```
master_df.to_csv('../data/USDA/USDA_unemployment.csv', index=False)
master_df.shape
```
<br>
# EDA
```
master_df.shape
master_df.head(2)
plt.figure(figsize = (17, 17))
sns.scatterplot(data = master_df, x = '2020', y = "Median Household Income (2019)", hue = 'state');
plt.xlabel("% of unemployment")
plt.title("% of Unemployment by Household Median income 2019")
set(master_df['FIPS'])
```
|
github_jupyter
|
```
import pandas as pd
import bs4 as bs
dfs=pd.read_html('https://en.wikipedia.org/wiki/Research_stations_in_Antarctica#List_of_research_stations')
dfr=pd.read_html('https://en.wikipedia.org/wiki/Antarctic_field_camps')
df=dfs[1][1:]
df.columns=dfs[1].loc[0].values
df.to_excel('bases.xlsx')
import requests
url='https://en.wikipedia.org/wiki/Research_stations_in_Antarctica'
f=requests.get(url).content
soup = bs.BeautifulSoup(f, 'lxml')
parsed_table = soup.find_all('table')[1]
data = [[''.join(td.strings)+'#'+td.a['href'] if td.find('a') else
''.join(td.strings)
for td in row.find_all('td')]
for row in parsed_table.find_all('tr')]
headers=[''.join(row.strings)
for row in parsed_table.find_all('th')]
df = pd.DataFrame(data[1:], columns=headers)
stations=[]
for i in df.T.items():  # iteritems() was removed in pandas 2.0
helper={}
dummy=i[1][0].split('#')
dummy0=dummy[0].split('[')[0].replace('\n',' ').replace('\n',' ').replace('\n',' ')
helper['name']=dummy0
helper['link']='https://en.wikipedia.org'+dummy[1]
dummy=i[1][2].replace('\n',' ').replace('\n',' ').replace('\n',' ')
if 'ummer since' in dummy:dummy='Permanent'
dummy=dummy.split('[')[0]
if 'emporary summer' in dummy:dummy='Summer'
if 'intermittently Summer' in dummy:dummy='Summer'
helper['type']=dummy
dummy=i[1][3].split('#')[0].replace('\n',' |').replace(']','').replace('| |','|')[1:]
if '' == dummy:dummy='Greenpeace'
helper['country']=dummy
dummy=i[1][4].replace('\n',' ').replace('\n',' ').replace('\n',' ').split(' ')[0]
if 'eteo' in dummy:dummy='1958'
helper['opened']=dummy
dummy=i[1][5].split('#')[0].replace('\n',' | ').replace('| and |','|').split('[')[0].replace('.','')
helper['program']=dummy
dummy=i[1][6].split('#')[0].replace('\n',', ').replace('| and |','|').split('[')[0].replace('.','')
helper['location']=dummy
dummy=i[1][7].replace('\n',' ')
if ' ' in dummy:
if 'Active' in dummy: dummy='Active'
elif 'Relocated to Union Glacier' in dummy: dummy='2014'
elif 'Unmanned activity' in dummy: dummy='Active'
elif 'Abandoned and lost' in dummy: dummy='1999'
elif 'Dismantled 1992' in dummy: dummy='1992'
elif 'Temporary abandoned since March 2017' in dummy: dummy='Active'
elif 'Reopened 23 November 2017' in dummy: dummy='Active'
elif 'Abandoned and lost' in dummy: dummy='1999'
else: dummy=dummy.split(' ')[1]
if dummy=='Active':
helper['active']=True
helper['closed']='9999'
else:
helper['active']=False
helper['closed']=dummy
if dummy=='Closed':
helper['active']=True
helper['closed']='9999'
dummy=i[1][8].replace('\n',', ').split('/')[2].split('(')[0].split('#')[0].split(',')[0].split('Coor')[0].split(u'\ufeff')[0].split(';')
helper['latitude']=dummy[0][1:]
helper['longitude']=dummy[1][1:]#.replace(' 0',' 0.001')[1:]
stations.append(helper)
dta=pd.DataFrame(stations)
dta.to_excel('stations.xlsx')
import cesiumpy
dta
iso2=pd.read_html('https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2')[2]
iso22=iso2[1:].set_index(1)[[0]]
def cc(c):
d=c.split('|')[0].strip()
if d=='Czech Republic': return 'CZ'
elif d=='Greenpeace': return 'AQ'
elif d=='Soviet Union': return 'RU'
elif d=='Russia': return 'RU'
elif d=='United States': return 'US'
elif d=='East Germany': return 'DE'
elif d=='United Kingdom': return 'GB'
elif d=='South Korea': return 'KR'
else: return iso22.loc[d][0]
flags=[]
for i in dta['country']:
flags.append('flags/glass2/'+cc(i).lower()+'.png')
dta['flag']=flags
dta[['name','link','active','type']].to_excel('links.xlsx')
```
The population spreadsheet `pop.xlsx` was filled in manually.
```
pop=pd.read_excel('pop.xlsx')
dta['summer']=pop['summer']
dta['winter']=pop['winter']
dta.to_excel('alldata.xlsx')
dta.set_index('name').T.to_json('antarctica.json')
v = cesiumpy.Viewer(animation=False, baseLayerPicker=True, fullscreenButton=True,
geocoder=False, homeButton=False, infoBox=True, sceneModePicker=True,
selectionIndicator=True, navigationHelpButton=False,
timeline=False, navigationInstructionsInitiallyVisible=True)
x=dta[dta['active']]
for i, row in x.iterrows():
r=0.7
t=10000
lon=float(row['longitude'])
lat=float(row['latitude'])
l0 = float(1**r)*t
cyl = cesiumpy.Cylinder(position=[lon, lat, l0/2.], length=l0,
topRadius=2.5e4, bottomRadius=2.5e4, material='grey',\
name=row['name'])
v.entities.add(cyl)
l1 = (float(row['summer'])**r)*t
cyl = cesiumpy.Cylinder(position=[lon, lat, l1/2.], length=l1*1.1,
topRadius=3e4, bottomRadius=3e4, material='crimson',\
name=row['name'])
v.entities.add(cyl)
l2 = float(row['winter']**r)*t
cyl = cesiumpy.Cylinder(position=[lon, lat, l2/2.], length=l2*1.2,
topRadius=6e4, bottomRadius=6e4, material='royalBlue',\
name=row['name'])
v.entities.add(cyl)
pin = cesiumpy.Pin.fromText(row['name'], color=cesiumpy.color.GREEN)
b = cesiumpy.Billboard(position=[float(row['longitude']), float(row['latitude']), l1*1.1+70000], \
image = row['flag'], scale=0.6,\
name=row['name'], pixelOffset = (0,0))
v.entities.add(b)
label = cesiumpy.Label(position=[float(row['longitude']), float(row['latitude']), l1*1.1+70000],\
text=row['name'], scale=0.6, name=row['name'],
pixelOffset = (0,22))
v.entities.add(label)
import codecs
with codecs.open("index.html", "w", encoding="utf-8") as f:
f.write(v.to_html())
v
```
|
github_jupyter
|
_Lambda School Data Science_
This sprint, your project is about water pumps in Tanzania. Can you predict which water pumps are faulty?
# Confusion Matrix
#### Objectives
- get and interpret the confusion matrix for classification models
- use classification metrics: precision, recall
- understand the relationships between precision, recall, thresholds, and predicted probabilities
- understand how Precision@K can help make decisions and allocate budgets
#### Install category_encoders
- Local, Anaconda: `conda install -c conda-forge category_encoders`
- Google Colab: `pip install category_encoders`
#### Downgrade Matplotlib? Need version != 3.1.1
Because of this issue: [sns.heatmap top and bottom boxes are cut off](https://github.com/mwaskom/seaborn/issues/1773)
> This was a matplotlib regression introduced in 3.1.1 which has been fixed in 3.1.2 (still forthcoming). For now the fix is to downgrade matplotlib to a prior version.
This _isn't_ required for your homework, but is required to run this notebook. `pip install matplotlib==3.1.0`
```
# !pip install category_encoders matplotlib==3.1.0
```
### Review: Load data, fit model
```
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
def wrangle(X):
"""Wrangles train, validate, and test sets in the same way"""
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Drop duplicate columns
duplicate_columns = ['quantity_group']
X = X.drop(columns=duplicate_columns)
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these like null values
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
# When columns have zeros and shouldn't, they are like null values
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
return X
LOCAL = '../data/tanzania/'
WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Tree-Ensembles/master/data/tanzania/'
source = WEB
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(source + 'train_features.csv'),
pd.read_csv(source + 'train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(source + 'test_features.csv')
sample_submission = pd.read_csv(source + 'sample_submission.csv')
# Split train into train & val. Make val the same size as test.
train, val = train_test_split(train, test_size=len(test),
stratify=train['status_group'], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
target = 'status_group'
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
```
## Get and interpret the confusion matrix for classification models
[Scikit-Learn User Guide — Confusion Matrix](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix)
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm
sns.heatmap?
y_val.unique()
sns.heatmap(cm, cmap='viridis', annot=True, fmt='d')
columns=[f'Predicted "{c}"' for c in y_val.unique()]
columns
index_names =[f'Actual "{c}"' for c in y_val.unique()]
index_names
df = pd.DataFrame(cm, columns=columns,index=index_names)
sns.heatmap(df,cmap='viridis', annot=True, fmt='d')
cm
cm.sum(axis=1).reshape(3,1)
y_val.nunique()
from sklearn.utils.multiclass import unique_labels
unique_labels(y_val)
y_val.unique()
def plot_confusion_matrix(y_true, y_pred, normalize=False):
columns=[f'Predicted "{c}"' for c in unique_labels(y_true)]
index_names =[f'Actual "{c}"' for c in unique_labels(y_true)]
    cm = confusion_matrix(y_true, y_pred)
if normalize:
cm = cm/cm.sum(axis=1).reshape(y_true.nunique(),1)
df = pd.DataFrame(cm, columns=columns,index=index_names)
sns.heatmap(df,cmap='viridis', annot=True, fmt='.2f')
plot_confusion_matrix(y_val, y_pred, normalize=True)
plot_confusion_matrix(y_val, y_pred)
y_val.value_counts()
```
#### How many correct predictions were made?
```
7005 + 332 + 4351
sum(y_val == y_pred)
```
#### How many total predictions were made?
```
len(y_val)
cm.sum()
```
#### What was the classification accuracy?
```
11688/14358
sum(y_val == y_pred)/len(y_val)
```
## Use classification metrics: precision, recall
[Scikit-Learn User Guide — Classification Report](https://scikit-learn.org/stable/modules/model_evaluation.html#classification-report)
```
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
```
#### Wikipedia, [Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)
> Both precision and recall are based on an understanding and measure of relevance.
> Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program's precision is 5/8 while its recall is 5/12.
> High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/700px-Precisionrecall.svg.png" width="400">
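Plugging the dog-photo numbers above straight into the definitions:

```python
# 5 real dogs found (TP), 3 cats flagged as dogs (FP), 7 dogs missed (FN)
tp, fp, fn = 5, 3, 7
precision = tp / (tp + fp)   # 5/8 = 0.625
recall = tp / (tp + fn)      # 5/12 ≈ 0.417
```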
#### [We can get precision & recall from the confusion matrix](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context))
```
plot_confusion_matrix(y_val, y_pred)
```
#### How many correct predictions of "non functional"?
```
correct_predictions_nonfunctional = 4351
```
#### How many total predictions of "non functional"?
```
total_predictions_nonfunctional = 622 + 156 + 4351
total_predictions_nonfunctional
```
#### What's the precision for "non functional"?
```
correct_predictions_nonfunctional/total_predictions_nonfunctional
```
#### How many actual "non functional" waterpumps?
```
actual_nonfunctional = 1098 + 68 + 4351
actual_nonfunctional
```
#### What's the recall for "non functional"?
```
correct_predictions_nonfunctional/actual_nonfunctional
(0.81+0.58+0.85)/3
print(classification_report(y_val, y_pred))
classification_report(y_val, y_pred, output_dict=True)['non functional']['precision']
```
## Understand the relationships between precision, recall, thresholds, and predicted probabilities. Understand how Precision@K can help make decisions and allocate budgets
### Imagine this scenario...
Suppose there are over 14,000 waterpumps that you _do_ have some information about, but you _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
```
len(test)
```
**You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.** You want to predict which 2,000 are most likely non-functional or in need of repair, to help you triage and prioritize your waterpump inspections.
You have historical inspection data for over 59,000 other waterpumps, which you'll use to fit your predictive model.
```
len(train) + len(val)
```
Based on this historical data, if you randomly chose waterpumps to inspect, then about 46% of the waterpumps would need repairs, and 54% would not need repairs.
```
y_train.value_counts(normalize=True)
```
**Can you do better than random at prioritizing inspections?**
In this scenario, we should define our target differently. We want to identify which waterpumps are non-functional _or_ functional but in need of repair:
```
y_train = y_train != 'functional'
y_val = y_val != 'functional'
y_train.value_counts(normalize=True)
```
We already made our validation set the same size as our test set.
```
len(val) == len(test)
```
We can refit our model, using the redefined target.
Then make predictions for the validation set.
```
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
y_pred_proba = pipeline.predict_proba(X_val)
y_pred_proba
```
And look at the confusion matrix:
```
y_pred_proba[:,1]
confusion_matrix(y_val, y_pred)
ax = sns.distplot(y_pred_proba[:,1], kde=False)
ax.axvline(0.5, color='r')
y_pred = y_pred_proba[:,1] > 0.6
pd.Series(y_pred).value_counts()
```
#### How many total predictions of "True" ("non functional" or "functional needs repair") ?
#### We don't have "budget" to take action on all these predictions
- But we can get predicted probabilities, to rank the predictions.
- Then change the threshold, to change the number of positive predictions, based on our budget.
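These two bullet points amount to sorting by predicted probability and putting the threshold at the budget-th highest score. A toy sketch (probabilities invented for illustration):

```python
import numpy as np

y_prob = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])  # predicted P(needs repair)
budget = 2                                          # visits we can afford
# Threshold = the budget-th highest probability, so exactly `budget` pumps are flagged
thresh = np.sort(y_prob)[::-1][budget - 1]
flagged = y_prob >= thresh
```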
### Get predicted probabilities and plot the distribution
### Change the threshold
```
from ipywidgets import interact, fixed
y_prob = y_pred_proba[:,1]
def set_threshold(y_true, y_prob, thresh):
    y_pred = y_prob > thresh
    ax = sns.distplot(y_prob, kde=False)
    ax.axvline(thresh, color='r')
    plt.show()
    plot_confusion_matrix(y_true, y_pred)
    print(classification_report(y_true, y_pred))
set_threshold(y_val, y_prob, thresh=0.5)
interact(set_threshold,
         y_true=fixed(y_val),
         y_prob=fixed(y_prob),
         thresh=(0, 1, 0.05))
```
### In this scenario ...
Accuracy _isn't_ the best metric!
Instead, change the threshold, to change the number of positive predictions, based on the budget. (You have the time and resources to go to just 2,000 waterpumps for proactive maintenance.)
Then, evaluate with the precision for "non functional"/"functional needs repair".
This is conceptually like **Precision@K**, where k=2,000.
Read more here: [Recall and Precision at k for Recommender Systems: Detailed Explanation with examples](https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54)
> Precision at k is the proportion of recommended items in the top-k set that are relevant
> Mathematically precision@k is defined as: `Precision@k = (# of recommended items @k that are relevant) / (# of recommended items @k)`
> In the context of recommendation systems we are most likely interested in recommending top-N items to the user. So it makes more sense to compute precision and recall metrics in the first N items instead of all the items. Thus the notion of precision and recall at k where k is a user definable integer that is set by the user to match the top-N recommendations objective.
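A minimal precision-at-k sketch (the arrays are made-up toy values, not model output):

```python
import numpy as np

def precision_at_k(y_true, y_prob, k):
    """Precision among the k examples with the highest predicted probability."""
    top_k = np.argsort(y_prob)[::-1][:k]    # indices of the k highest scores
    return np.asarray(y_true)[top_k].mean()

y_true = [1, 0, 1, 1, 0, 0]                 # 1 = actually needs repair
y_prob = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]     # model's ranking scores
precision_at_k(y_true, y_prob, k=3)         # 2 of the top 3 are true positives
```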
We asked, can you do better than random at prioritizing inspections?
If we had randomly chosen waterpumps to inspect, we estimate that only 920 waterpumps would be repaired after 2,000 maintenance visits. (46%)
But using our predictive model, in the validation set, we successfully identified over 1,600 waterpumps in need of repair!
So we will use this predictive model with the dataset of over 14,000 waterpumps that we _do_ have some information about, but we _don't_ know whether they are currently functional, or functional but need repair, or non-functional.
We will predict which 2,000 are most likely non-functional or in need of repair.
We estimate that approximately 1,600 waterpumps will be repaired after these 2,000 maintenance visits.
So we're confident that our predictive model will help triage and prioritize waterpump inspections.
# Assignment
- Read [Maximizing Scarce Maintenance Resources with Data: Applying predictive modeling, precision at k, and clustering to optimize impact](https://towardsdatascience.com/maximizing-scarce-maintenance-resources-with-data-8f3491133050), by Lambda DS3 student Michael Brady. His blog post extends the Tanzania Waterpumps scenario, far beyond what's in this lecture notebook.
If your Kaggle Public Leaderboard score is:
- **Nonexistent**: You need to work on your model and submit predictions
- **< 70%**: You should work on your model and submit predictions
- **70% < score < 80%**: You may want to work on visualizations and write a blog post
- **> 80%**: You should work on visualizations and write a blog post
## Stretch goals — Highly Recommended Links
- Read Google Research's blog post, [Attacking discrimination with smarter machine learning](https://research.google.com/bigpicture/attacking-discrimination-in-ml/), and explore the interactive visualizations. _"A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score."_
- Read the blog post, [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415). You can replicate the code as-is, ["the hard way"](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit). Or you can apply it to the Tanzania Waterpumps data.
- Read this [notebook about how to calculate expected value from a confusion matrix by treating it as a cost-benefit matrix](https://github.com/podopie/DAT18NYC/blob/master/classes/13-expected_value_cost_benefit_analysis.ipynb).
- (Re)read the [Simple guide to confusion matrix terminology](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) and watch the 35 minute video.
|
github_jupyter
|
```
import math
import matplotlib.pyplot as plt
N = 20
a = -1
b = 0.5
w_0 = 0
def func(t):  # the signal to predict
return math.cos(t)**2 - 0.05
def error(b_i):  # Euclidean norm of an epoch's local errors
ar1 = [math.pow(abs(b), 2) for b in b_i]
s = sum(ar1)
e = math.sqrt(s)
return e
def okno(w_arr, points_arr, t_n, n):  # "okno" = window: one sliding-window weight update
_x_n = sum([w * func(t) for w, t in zip(w_arr[1:], points_arr)]) + w_arr[0]
x_n = func(t_n)
    # local prediction error
b_n = x_n - _x_n
w_arr = [w_arr[0]] + [w + (n * b_n * func(t)) for w, t in zip(w_arr[1:], points_arr)]
return w_arr, b_n, _x_n
def obychenie(M, n, p):  # "obychenie" = training: M epochs, learning rate n, window size p
w_arr = [0 for x in range(p + 1)]
    # expected value (mean) of the signal, used as the bias weight
w_arr[0] = 0.05
    # compute the step size
shag = (b - a) * 1.0 / (N - 1)
start_i = 0
start_x = a
errors_array = []
epoxa_num = 1
err_result = []
e = 0
while epoxa_num < M:
end_x = start_x + (p - 1) * shag
end_i = start_i + p - 1
if end_x > b:
start_i = 0
start_x = a
e = error(errors_array)
err_result.append(e)
# print(epoxa_num, e,errors_array)
errors_array = []
end_x = start_x + (p - 1) * shag
end_i = start_i + p - 1
epoxa_num += 1
points_arr = [a + (t * shag) for t in range(start_i, end_i + 1)]
w_arr, err, _x_n = okno(w_arr, points_arr, end_x + shag, n)
errors_array.append(err)
start_i += 1
start_x += shag
# print('W:',w_arr)
# print('E:', e) # plt.plot(err_result)
return w_arr, err_result[-1]
def predpologenie(w_arr, p):  # "predpologenie" = prediction beyond the training interval
shag = (b - a) * 1.0 / (N - 1)
a1 = b - ((p - 1) * shag)
start_i = 0
x_arr = []
right_x_arr = [func(b + x * shag) for x in range(1, N)]
points_arr = [b - x * shag for x in range(p)][::-1] + [b + shag * x for x in range(1, N)]
x_res = [func(t) for t in points_arr[:p]]
b_arr = []
for i in range(p, N + p):
_x_n = sum([w * x for w, x in zip(w_arr[1:], x_res[i - p:i])]) + w_arr[0]
x_n = func(b + shag * (i - p + 1))
b_i = x_n - _x_n
b_arr.append(b_i)
# print(_x_n)
x_res.append(_x_n)
return x_res[p - 1:], right_x_arr, b_arr
M = 2000
n = 0.6
p = 5
w_arr, e = obychenie(M, n, p)
# -----
new_x_arr, right_x_arr, b_arr = predpologenie(w_arr, p)
shag = (b - a) * 1.0 / (N - 1)
old_x_arr = [func(a + x * shag) for x in range(N)]
plt.plot(old_x_arr + right_x_arr)
print('Right:', len(right_x_arr))
print('Wrong:', len(new_x_arr), new_x_arr)
plt.plot([x + len(old_x_arr) for x in range(len(new_x_arr))], new_x_arr, 'ro')
plt.title('Prediction with M=%d' % M)
plt.show()
```
|
github_jupyter
|
# Python Collections
* Lists
* Tuples
* Dictionaries
* Sets
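One literal of each, as a quick taste before diving in (toy values):

```python
nums = [10, 20, 30]          # list: ordered, mutable
point = (3, 5)               # tuple: ordered, immutable
student = {'id': 123}        # dictionary: key -> value mapping
letters = {'a', 'b', 'c'}    # set: unordered, unique elements
```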
## lists
```
x = 10
x = 20
x
x = [10, 20]
x
x = [10, 14.3, 'abc', True]
x
print(dir(x))
l1 = [1, 2, 3]
l2 = [4, 5, 6]
l1 + l2 # concat
l3 = [1, 2, 3, 4, 5, 6]
l3.append(7)
l3
l3.count(2)
l3.count(8)
len(l3)
sum(l3), max(l3), min(l3)
l1
l2
l_sum = [] # l_sum = list()
if len(l1) == len(l2):
for i in range(len(l1)):
l_sum.append(l1[i] + l2[i])
l_sum
zip(l1, l2)
list(zip(l1, l2))
list(zip(l1, l3))
l_sum = [a + b for a,b in zip(l1, l2)]
l_sum
l_sum = [a + b for a,b in zip(l1, l3)]
l_sum
l3
l_sum.extend(l3[len(l_sum):])
l_sum
```
## tuple
A tuple is an immutable list.
```
point = (3, 5)
print(dir(point))
l1[0]
point[0]
```
## comparison in tuples
```
(2, 3) > (1, 7)
(1, 4) > (5, 9)
(1, 10) > (5, 9)
(5, 10) > (5, 9)
(5, 7) > (5, 9)
```
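The pattern above is lexicographic comparison: Python compares tuples element by element and stops at the first pair that differs.

```python
# Only the first differing element decides the comparison
assert (2, 3) > (1, 7)       # 2 > 1, so (3, 7) is never compared
assert not (1, 4) > (5, 9)   # 1 < 5
assert (5, 10) > (5, 9)      # first elements tie, so compare the second
```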
## dictionaries
```
s = [134, 'Ahmed', 'IT']
s[1]
s[2]
# dic = {k:v, k:v, ....}
student = {'id' : 123, 'name': 'Ahmed', 'dept': 'IT'}
student
student['name']
print(dir(student))
student['age']
if 'age' in student:
print(student['age'])
student['age']
student['age'] = 22 # add item
student
student['age'] = 24 # update item
student
student.get('gpa')
print(student.get('gpa'))
student.get('gpa', 0)
student.get('address', 'NA')
student.items()
student.keys()
student.values()
age = student.pop('age')
age
student
item = student.popitem()  # removes and returns the last (key, value) pair
item
student
```
### set
```
set1 = {'a', 'b', 'c'}
print(dir(set1))
set1.add('d')
set1
set1.add('a')
set1
'a' in set1
for e in set1:
print(e)
```
## count word frequency
```
text = '''
middletons him says Garden offended do shoud asked or ye but narrow are first knows but going taste by six zealously said weeks come partiality great simplicity mr set By sufficient an blush enquire of Then projection into mean county mile garden with up people should shameless little Started get bed agreement get him as get around mrs wound next was Full might nay going totally four can happy may packages dwelling sent on face newspaper laughing off a one Houses wont on on thing hundred is he it forming humoured Rose at seems but Likewise supposing too poor good from get ye terminated fact when horrible am ye painful for it His good ask valley too wife led offering call myself favour we Sportsman to get remaining By ye on will be Thoughts carriage wondered in end her met about other me time position and his unknown first explained it breakfast are she draw of september keepf she mr china simple sing Nor would be how came Chicken them so an answered cant how or new and mother Total such knew perceived here does him you no Money warmly wholly people dull formerly an simplicity What pianoforte all favourite at wants doubtful incommode delivered Express formerly as uneasy silent am dear saw why put built had weddings for ought ecstatic he to must as forming like no boy understood use pleasure agreeable Felicity mirth had near yet attention at mean decisively need one mirth should denoting have she now juvenile dried an society speaking entreaties ten you am am pianoforte therefor friendship old no whom in many children law drawn eat views The set my lady will him could Inquietude desirous valley terms few Sir things Preferred though pleasant know then those down these means set garret formed in questions though Melancholy pure preserved strictly curiosity otherwise So oh above offices he who reasonably within she no concluded weeks met On like saw relation design for is because are disposed apartments We yet more an want stop Recommend ham believe who it can in 
appearance valley they melancholy besides remove ought genius up has Am excited Goodness latter directly my agreed questions case why check moment dine got put next he so steepest held again evening doubt wish not village six contented him indeed if Dashwood wholly so something Depending and all over wooded He mrs like nor forming little that so mrs greatest friendly of if having this you joy entire mrs can this really since Collected by Entrance rapid took up Hearts His newspaper tended so right through fat so An body exercise speedily warmth remarkably strongly disposing need in trifling stood led hence assured of in one He out an of had over to begin been really On do to fulfilled just Evil friends in so mrs do on Prepared neither was west if Could come The his finished own being it pretty may Continuing Spite performed half peculiar true begin disposal west Remain barton Nay unsatiable over gay out as new be True you humoured u old money excuse does what once Subjects it you two Can post kept temper Welcomed had not prudent on although there announcing after via right giving has mr simplicity speaking reserved by ask snug rapturous say at so Direct where wrong since matter very in Visited passed by him Polite itself she between thus concealed shy against Written juvenile explained no Ham expense as packages produce today until why way wife Home on joy its said reserved in Hard sake suspected mr mr plan still at an Led ample their no indeed miss or jennings my Her back has an are an jokes its Dejection she ye roof early we true up he said they prevailed real continual merely our no to in but why expense felt less true Rich yesterday Admitting put stronger drawings now the shortly gay wished whole easily fine compliment Answer yet mean am see departure Necessary found feeling Not existence make compact for his oh now sufficient Neglected men hence happening high part Off message inhabiting strangers on do during Unpleasant any Entered advice great he Projecting 
be mutual bad Our make did i our in pleasure elsewhere wish material become out length uneasy some offending suitable misery dull ecstatic yet accused leave had Oh suitable ecstatic ten are throwing guest he so felicity you how every residence deal besides attacks estimating bred Mrs hearing blessing nay ago than favourable middleton water stronger barton match steepest or or situation Winter much two yet songs me only thanks no though of do Handsome aften hope Own your dependent up Attended her making come ya do Rich Dear
'''
len(text) # num of chars
len(text.split()) # num of words. whitespace is the delimter
words = text.split()
words[:10]
len(text.split('\n')) # num of lines
count_dict = dict()
for word in words:
if word in count_dict:
count_dict[word] += 1
else:
count_dict[word] = 1
#count_dict
sum(count_dict.values())
count_dict = dict()
for word in words:
count_dict[word] = count_dict.get(word, 0) + 1
sum(count_dict.values())
print(sorted(count_dict.items()))
sorted(count_dict.values(), reverse=True)
r_count_dict = [ (v, k) for k,v in count_dict.items()]
sorted(r_count_dict, reverse=True)[:10]
def find_all_indices(words, keyword):
    positions = []
    for i in range(len(words)):
        if words[i] == keyword:
            positions.append(i)
    return positions
find_all_indices(words, 'suitable')
```
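The standard library also ships this pattern as `collections.Counter`, which replaces both the counting loop and the sort-by-value trick:

```python
from collections import Counter

words = 'the cat sat on the mat the end'.split()
counts = Counter(words)
counts.most_common(2)   # the two most frequent words, highest count first
```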
|
github_jupyter
|
# Generate and Perform Tiny Performances from the MDRNN
- Generates unconditioned and conditioned output from RoboJam's MDRNN
- Need to open `touchscreen_performance_receiver.pd` in [Pure Data](http://msp.ucsd.edu/software.html) to hear the sound of performances.
- To test generated performances, there need to be example performances in `.csv` format in `../performances`. These aren't included in the repo right now, but might be added in the future.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# little path hack to get robojam from one directory up in the filesystem.
from context import * # imports robojam
# import robojam # alternatively do this.
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
```
## Plotting Methods
Mainly using the `plot_2D` and `plot_3D` methods below (and their double-performance variants) to generate 2D and 3D plots.
```
input_colour = 'darkblue'
gen_colour = 'firebrick'
plt.style.use('seaborn-talk')
osc_client = robojam.TouchScreenOscClient()
def plot_2D(perf_df, name="foo", saving=False):
"""Plot in 2D"""
## Plot the performance
swipes = divide_performance_into_swipes(perf_df)
plt.figure(figsize=(8, 8))
for swipe in swipes:
p = plt.plot(swipe.x, swipe.y, 'o-')
plt.setp(p, color=gen_colour, linewidth=5.0)
plt.ylim(1.0,0)
plt.xlim(0,1.0)
plt.xticks([])
plt.yticks([])
if saving:
plt.savefig(name+".png", bbox_inches='tight')
plt.close()
else:
plt.show()
def plot_double_2d(perf1, perf2, name="foo", saving=False):
"""Plot two performances in 2D"""
plt.figure(figsize=(8, 8))
swipes = divide_performance_into_swipes(perf1)
for swipe in swipes:
p = plt.plot(swipe.x, swipe.y, 'o-')
plt.setp(p, color=input_colour, linewidth=5.0)
swipes = divide_performance_into_swipes(perf2)
for swipe in swipes:
p = plt.plot(swipe.x, swipe.y, 'o-')
plt.setp(p, color=gen_colour, linewidth=5.0)
plt.ylim(1.0,0)
plt.xlim(0,1.0)
plt.xticks([])
plt.yticks([])
if saving:
plt.savefig(name+".png", bbox_inches='tight')
plt.close()
else:
plt.show()
def plot_3D(perf_df, name="foo", saving=False):
"""Plot in 3D"""
## Plot in 3D
swipes = divide_performance_into_swipes(perf_df)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for swipe in swipes:
p = ax.plot(list(swipe.index), list(swipe.x), list(swipe.y), 'o-')
plt.setp(p, color=gen_colour, linewidth=5.0)
ax.set_ylim(0,1.0)
ax.set_zlim(1.0,0)
ax.set_xlabel('time (s)')
ax.set_ylabel('x')
ax.set_zlabel('y')
if saving:
plt.savefig(name+".png", bbox_inches='tight')
plt.close()
else:
plt.show()
def plot_double_3d(perf1, perf2, name="foo", saving=False):
"""Plot two performances in 3D"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
swipes = divide_performance_into_swipes(perf1)
for swipe in swipes:
p = ax.plot(list(swipe.index), list(swipe.x), list(swipe.y), 'o-')
plt.setp(p, color=input_colour, linewidth=5.0)
swipes = divide_performance_into_swipes(perf2)
for swipe in swipes:
p = ax.plot(list(swipe.index), list(swipe.x), list(swipe.y), 'o-')
plt.setp(p, color=gen_colour, linewidth=5.0)
ax.set_ylim(0,1.0)
ax.set_zlim(1.0,0)
ax.set_xlabel('time (s)')
ax.set_ylabel('x')
ax.set_zlabel('y')
if saving:
plt.savefig(name+".png", bbox_inches='tight')
plt.close()
else:
plt.show()
def plot_and_perform_sequentially(perf1, perf2, perform=True):
total = np.append(perf1, perf2, axis=0)
total = total.T
perf1 = perf1.T
perf2 = perf2.T
perf1_df = pd.DataFrame({'x':perf1[0], 'y':perf1[1], 't':perf1[2]})
perf2_df = pd.DataFrame({'x':perf2[0], 'y':perf2[1], 't':perf2[2]})
total_df = pd.DataFrame({'x':total[0], 'y':total[1], 't':total[2]})
perf1_df['time'] = perf1_df.t.cumsum()
total_perf1_time = perf1_df.t.sum()
perf2_df['time'] = perf2_df.t.cumsum() + total_perf1_time
total_df['time'] = total_df.t.cumsum()
## Plot the performances
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(perf1_df.time, perf1_df.x, perf1_df.y, '.b-')
ax.plot(perf2_df.time, perf2_df.x, perf2_df.y, '.r-')
plt.show()
if perform:
osc_client.playPerformance(total_df)
def divide_performance_into_swipes(perf_df):
"""Divides a performance into a sequence of swipe dataframes."""
touch_starts = perf_df[perf_df.moving == 0].index
performance_swipes = []
remainder = perf_df
for att in touch_starts:
swipe = remainder.iloc[remainder.index < att]
performance_swipes.append(swipe)
remainder = remainder.iloc[remainder.index >= att]
performance_swipes.append(remainder)
return performance_swipes
```
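To see what `divide_performance_into_swipes` does, here is a toy, self-contained check. The function is repeated so the block runs on its own, and the touch data is made up; a row with `moving == 0` marks the start of a new swipe:

```python
import pandas as pd

def divide_performance_into_swipes(perf_df):
    """Divides a performance into a sequence of swipe dataframes."""
    touch_starts = perf_df[perf_df.moving == 0].index
    performance_swipes = []
    remainder = perf_df
    for att in touch_starts:
        swipe = remainder.iloc[remainder.index < att]
        performance_swipes.append(swipe)
        remainder = remainder.iloc[remainder.index >= att]
    performance_swipes.append(remainder)
    return performance_swipes

# Two swipes: touches start at t=0.0 and t=0.5 (index = time in seconds).
toy = pd.DataFrame({'x': [.1, .2, .3, .7, .8],
                    'y': [.5, .5, .5, .2, .2],
                    'moving': [0, 1, 1, 0, 1]},
                   index=[0.0, 0.1, 0.2, 0.5, 0.6])
swipes = divide_performance_into_swipes(toy)
print([len(s) for s in swipes])  # [0, 3, 2]: empty lead-in, then two swipes
```

Note that a touch start at the very first row yields a leading empty swipe, which the plotting loops simply skip.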
## Generate and play a performance
Performances are generated using the `generate_random_tiny_performance` method, which is set to produce performances of up to 5 seconds. The LSTM state and first touch can optionally be kept from the last evaluation or re-initialised.
This block can be run multiple times to generate more performances.
```
# Generate and play one unconditioned performance
# Hyperparameters:
HIDDEN_UNITS = 512
LAYERS = 3
MIXES = 16
# Network
net = robojam.MixtureRNN(mode=robojam.NET_MODE_RUN, n_hidden_units=HIDDEN_UNITS, n_mixtures=MIXES, batch_size=1, sequence_length=1, n_layers=LAYERS)
osc_client.setSynth(instrument = "chirp")
model_file = "../models/mdrnn-2d-1d-3layers-512units-16mixtures"
TEMPERATURE = 1.00
# Generate
perf = robojam.generate_random_tiny_performance(net, np.array([0.5, 0.5, 0.1]), time_limit=5.0, temp=TEMPERATURE, model_file=model_file)
# Plot and perform.
perf_df = robojam.perf_array_to_df(perf)
plot_2D(perf_df, saving=False)
plot_3D(perf_df, saving=False)
osc_client.playPerformance(perf_df)
## Generate a number of unconditioned performances
NUMBER = 10
# Hyperparameters:
HIDDEN_UNITS = 512
LAYERS = 3
MIXES = 16
net = robojam.MixtureRNN(mode=robojam.NET_MODE_RUN, n_hidden_units=HIDDEN_UNITS, n_mixtures=MIXES, batch_size=1, sequence_length=1, n_layers=LAYERS)
# Setup synth for performance
osc_client.setSynth(instrument = "chirp")
model_file = "../models/mdrnn-2d-1d-3layers-512units-16mixtures"
TEMPERATURE = 1.00
for i in range(NUMBER):
name = "touchperf-uncond-" + str(i)
net.state = None # reset state if needed.
perf = robojam.generate_random_tiny_performance(net, np.array([0.5, 0.5, 0.1]), time_limit=5.0, temp=TEMPERATURE, model_file=model_file)
perf_df = robojam.perf_array_to_df(perf)
plot_2D(perf_df, name=name, saving=True)
```
# Condition and Generate
Conditions the MDRNN on a random touchscreen performance, then generates a 5 second response.
This requires example performances (`.csv` format) to be in `../performances`.
See `TinyPerformanceLoader` for more details.
```
# Load the sample touchscreen performances:
loader = robojam.TinyPerformanceLoader(verbose=False)
# Fails if example performances are not in ../performances
# Generate and play one conditioned performance
# Hyperparameters:
HIDDEN_UNITS = 512
LAYERS = 3
MIXES = 16
net = robojam.MixtureRNN(mode=robojam.NET_MODE_RUN, n_hidden_units=HIDDEN_UNITS, n_mixtures=MIXES, batch_size=1, sequence_length=1, n_layers=LAYERS)
# Setup synth for performance
osc_client.setSynth(instrument = "chirp")
model_file = "../models/mdrnn-2d-1d-3layers-512units-16mixtures"
TEMPERATURE = 1.00
in_df = loader.sample_without_replacement(n=1)[0]
in_array = robojam.perf_df_to_array(in_df)
output_perf = robojam.condition_and_generate(net, in_array, time_limit=5.0, temp=TEMPERATURE, model_file=model_file)
out_df = robojam.perf_array_to_df(output_perf)
# Plot and perform
plot_double_2d(in_df, out_df)
plot_double_3d(in_df, out_df)
# just perform the output...
osc_client.playPerformance(out_df)
# TODO: implement polyphonic playback. Somehow.
# Generate a number of conditioned performances.
NUMBER = 10
# Hyperparameters:
HIDDEN_UNITS = 512
LAYERS = 3
MIXES = 16
net = robojam.MixtureRNN(mode=robojam.NET_MODE_RUN, n_hidden_units=HIDDEN_UNITS, n_mixtures=MIXES, batch_size=1, sequence_length=1, n_layers=LAYERS)
# Setup synth for performance
osc_client.setSynth(instrument = "chirp")
model_file = "../models/mdrnn-2d-1d-3layers-512units-16mixtures"
TEMPERATURE = 1.00
# make the plots
input_perf_dfs = loader.sample_without_replacement(n=NUMBER)
for i, in_df in enumerate(input_perf_dfs):
title = "touchperf-cond-" + str(i)
in_array = robojam.perf_df_to_array(in_df)
in_time = in_array.T[2].sum()
print("In Time:", in_time)
output_perf = robojam.condition_and_generate(net, in_array, time_limit=5.0, temp=TEMPERATURE, model_file=model_file)
out_df = robojam.perf_array_to_df(output_perf)
print("Out Time:", output_perf.T[2].sum())
plot_double_2d(in_df, out_df, name=title, saving=True)
```
|
github_jupyter
|
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).*
Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
```
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
```
# Model Zoo -- Convolutional Neural Network (VGG19 Architecture)
Implementation of the VGG-19 architecture on CIFAR-10.
- Runs on CPU (not recommended here) or GPU (if available)
Reference for VGG-19:
- Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
The following table (taken from Simonyan & Zisserman referenced above) summarizes the VGG19 architecture:

## Imports
```
import numpy as np
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
```
## Settings and Dataset
```
##########################
### SETTINGS
##########################
# Device
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Device:', DEVICE)
# Hyperparameters
random_seed = 1
learning_rate = 0.001
num_epochs = 20
batch_size = 128
# Architecture
num_features = 784  # not used by the VGG model below; kept from the notebook template
num_classes = 10
##########################
### CIFAR-10 DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.CIFAR10(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.CIFAR10(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
```
## Model
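The `# calculate same padding` comment in the model code solves the convolution output-size formula for the padding `p`. A tiny pure-Python sketch of that arithmetic (`conv_out` and `same_padding` are hypothetical helper names, not part of the notebook):

```python
def conv_out(w, k, p, s=1):
    # output width of a convolution: (w - k + 2*p) / s + 1
    return (w - k + 2 * p) // s + 1

def same_padding(w, k, s=1):
    # solve (w - k + 2*p)/s + 1 = w for p  =>  p = (s*(w-1) - w + k) / 2
    return (s * (w - 1) - w + k) // 2

# a 3x3 kernel with stride 1 on a 32x32 CIFAR-10 image needs padding 1
print(same_padding(32, 3), conv_out(32, 3, 1))  # 1 32
```

With `padding=1` every 3x3 convolution below preserves the spatial size, so only the five max-pooling layers shrink the feature maps (32 → 1 after five halvings).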
```
##########################
### MODEL
##########################
class VGG16(torch.nn.Module):  # note: the layer layout below (16 conv + 3 linear) is VGG-19, despite the class name
def __init__(self, num_features, num_classes):
super(VGG16, self).__init__()
# calculate same padding:
# (w - k + 2*p)/s + 1 = o
# => p = (s(o-1) - w + k)/2
self.block_1 = nn.Sequential(
nn.Conv2d(in_channels=3,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
# (1(32-1)- 32 + 3)/2 = 1
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=64,
out_channels=64,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_2 = nn.Sequential(
nn.Conv2d(in_channels=64,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=128,
out_channels=128,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_3 = nn.Sequential(
nn.Conv2d(in_channels=128,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=256,
out_channels=256,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_4 = nn.Sequential(
nn.Conv2d(in_channels=256,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.block_5 = nn.Sequential(
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=512,
out_channels=512,
kernel_size=(3, 3),
stride=(1, 1),
padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(2, 2),
stride=(2, 2))
)
self.classifier = nn.Sequential(
nn.Linear(512, 4096),
nn.ReLU(True),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Linear(4096, num_classes)
)
for m in self.modules():
if isinstance(m, torch.nn.Conv2d):
#n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
#m.weight.data.normal_(0, np.sqrt(2. / n))
m.weight.detach().normal_(0, 0.05)
if m.bias is not None:
m.bias.detach().zero_()
elif isinstance(m, torch.nn.Linear):
m.weight.detach().normal_(0, 0.05)
m.bias.detach().zero_()
def forward(self, x):
x = self.block_1(x)
x = self.block_2(x)
x = self.block_3(x)
x = self.block_4(x)
x = self.block_5(x)
logits = self.classifier(x.view(-1, 512))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = VGG16(num_features=num_features,
num_classes=num_classes)
model = model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
```
## Training
```
def compute_accuracy(model, data_loader):
model.eval()
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(DEVICE)
targets = targets.to(DEVICE)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
def compute_epoch_loss(model, data_loader):
model.eval()
curr_loss, num_examples = 0., 0
with torch.no_grad():
for features, targets in data_loader:
features = features.to(DEVICE)
targets = targets.to(DEVICE)
logits, probas = model(features)
loss = F.cross_entropy(logits, targets, reduction='sum')
num_examples += targets.size(0)
curr_loss += loss
curr_loss = curr_loss / num_examples
return curr_loss
start_time = time.time()
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(DEVICE)
targets = targets.to(DEVICE)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model.eval()
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d | Train: %.3f%% | Loss: %.3f' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader),
compute_epoch_loss(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
```
## Evaluation
```
with torch.set_grad_enabled(False): # save memory during inference
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
%watermark -iv
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/modichirag/flowpm/blob/master/notebooks/flowpm_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%pylab inline
from flowpm import linear_field, lpt_init, nbody, cic_paint
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
from scipy.interpolate import InterpolatedUnivariateSpline as iuspline
klin = np.loadtxt('../flowpm/data/Planck15_a1p00.txt').T[0]
plin = np.loadtxt('../flowpm/data/Planck15_a1p00.txt').T[1]
ipklin = iuspline(klin, plin)
import flowpm
stages = np.linspace(0.1, 1.0, 10, endpoint=True)
initial_conditions = flowpm.linear_field(128, # size of the cube
100, # Physical size of the cube
ipklin, # Initial powerspectrum
batch_size=1)
# Sample particles
state = flowpm.lpt_init(initial_conditions, a0=0.1)
# Evolve particles down to z=0
final_state = flowpm.nbody(state, stages, 128)
# Retrieve final density field
final_field = flowpm.cic_paint(tf.zeros_like(initial_conditions), final_state[0])
with tf.Session() as sess:
sim = sess.run(final_field)
imshow(sim[0].sum(axis=0))
def _binomial_kernel(num_channels, dtype=tf.float32):
"""Creates a 5x5x5 b-spline kernel.
Args:
num_channels: The number of channels of the image to filter.
dtype: The type of an element in the kernel.
Returns:
A tensor of shape `[5, 5, 5, num_channels, num_channels]`.
"""
kernel = np.array((1., 4., 6., 4., 1.), dtype=dtype.as_numpy_dtype())
kernel = np.einsum('ij,k->ijk', np.outer(kernel, kernel), kernel)
kernel /= np.sum(kernel)
kernel = kernel[:, :, :, np.newaxis, np.newaxis]
return tf.constant(kernel, dtype=dtype) * tf.eye(num_channels, dtype=dtype)
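# Sanity check of the kernel construction above, using numpy only:
# the separable [1, 4, 6, 4, 1] binomial kernel, once normalized,
# is a proper 5x5x5 averaging kernel whose weights sum to 1.
import numpy as np
_k1d = np.array((1., 4., 6., 4., 1.))
_k3d = np.einsum('ij,k->ijk', np.outer(_k1d, _k1d), _k1d)
_k3d /= np.sum(_k3d)
assert _k3d.shape == (5, 5, 5) and abs(_k3d.sum() - 1.0) < 1e-12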
def _downsample(cube, kernel):
"""Downsamples the image using a convolution with stride 2.
"""
return tf.nn.conv3d(
input=cube, filters=kernel, strides=[1, 2, 2, 2, 1], padding="SAME")
def _upsample(cube, kernel, output_shape=None):
"""Upsamples the image using a transposed convolution with stride 2.
"""
if output_shape is None:
output_shape = tf.shape(input=cube)
output_shape = (output_shape[0], output_shape[1] * 2, output_shape[2] * 2,
output_shape[3] * 2, output_shape[4])
return tf.nn.conv3d_transpose(
cube,
kernel * 2.0**3,
output_shape=output_shape,
strides=[1, 2, 2, 2, 1],
padding="SAME")
def _build_pyramid(cube, sampler, num_levels):
"""Creates the different levels of the pyramid.
"""
kernel = _binomial_kernel(1, dtype=cube.dtype)
levels = [cube]
for _ in range(num_levels):
cube = sampler(cube, kernel)
levels.append(cube)
return levels
def _split(cube, kernel):
"""Splits the image into high and low frequencies.
This is achieved by smoothing the input image and subtracting the smoothed
version from the input.
"""
low = _downsample(cube, kernel)
high = cube - _upsample(low, kernel, tf.shape(input=cube))
return high, low
def downsample(cube, num_levels, name=None):
"""Generates the different levels of the pyramid (downsampling).
"""
with tf.name_scope(name, "pyramid_downsample", [cube]):
cube = tf.convert_to_tensor(value=cube)
return _build_pyramid(cube, _downsample, num_levels)
def merge(levels, name=None):
"""Merges the different levels of the pyramid back to an image.
"""
with tf.name_scope(name, "pyramid_merge", levels):
levels = [tf.convert_to_tensor(value=level) for level in levels]
cube = levels[-1]
kernel = _binomial_kernel(tf.shape(input=cube)[-1], dtype=cube.dtype)
for level in reversed(levels[:-1]):
cube = _upsample(cube, kernel, tf.shape(input=level)) + level
return cube
def split(cube, num_levels, name=None):
"""Generates the different levels of the pyramid.
"""
with tf.name_scope(name, "pyramid_split", [cube]):
cube = tf.convert_to_tensor(value=cube)
kernel = _binomial_kernel(tf.shape(input=cube)[-1], dtype=cube.dtype)
low = cube
levels = []
for _ in range(num_levels):
high, low = _split(low, kernel)
levels.append(high)
levels.append(low)
return levels
def upsample(cube, num_levels, name=None):
"""Generates the different levels of the pyramid (upsampling).
"""
with tf.name_scope(name, "pyramid_upsample", [cube]):
cube = tf.convert_to_tensor(value=cube)
return _build_pyramid(cube, _upsample, num_levels)
field = tf.expand_dims(final_field, -1)
# Split field into short range and large scale components
levels = split(field, 1)
levels
# Compute forces on both fields
def force(field):
shape = field.get_shape()
batch_size, nc = shape[1], shape[2].value
kfield = flowpm.utils.r2c3d(field)
kvec = flowpm.kernels.fftk((nc, nc, nc), symmetric=False)
lap = tf.cast(flowpm.kernels.laplace_kernel(kvec), tf.complex64)
fknlrange = flowpm.kernels.longrange_kernel(kvec, 0)
kweight = lap * fknlrange
pot_k = tf.multiply(kfield, kweight)
f = []
for d in range(3):
force_dc = tf.multiply(pot_k, flowpm.kernels.gradient_kernel(kvec, d))
forced = flowpm.utils.c2r3d(force_dc)
f.append(forced)
return tf.stack(f, axis=-1)
force_levels = [force(levels[0][...,0]), force(levels[1][...,0])*2]
force_levels
rec = merge(force_levels)
rec
# Direct force computation on input field
dforce = force(field[...,0])
with tf.Session() as sess:
sim, l0, l1, r, df = sess.run([final_field, force_levels[0], force_levels[1], rec, dforce])
figure(figsize=(15,5))
subplot(131)
imshow(sim[0].sum(axis=1))
title('Input')
subplot(132)
imshow(l0[0].sum(axis=1)[...,0])
title('short range forces')
subplot(133)
imshow(l1[0].sum(axis=1)[...,0]);
title('long range forces')
figure(figsize=(15,5))
subplot(131)
imshow(r[0].sum(axis=1)[...,0]);
title('Multi-Grid Force Computation')
subplot(132)
imshow(df[0].sum(axis=1)[...,0]);
title('Direct Force Computation')
subplot(133)
imshow((r - df)[0,8:-8,8:-8,8:-8].sum(axis=1)[...,0]);
title('Residuals');
levels = split(field, 4)
rec = merge(levels)
with tf.Session() as sess:
sim, l0, l1, l2, l3, r = sess.run([final_field, levels[0], levels[1], levels[2], levels[3], rec[...,0]])
figure(figsize=(25,10))
subplot(151)
imshow(sim[0].sum(axis=0))
title('Input')
subplot(152)
imshow(l0[0].sum(axis=0)[...,0])
title('l1')
subplot(153)
imshow(l1[0].sum(axis=0)[...,0]);
title('l2')
subplot(154)
imshow(l2[0].sum(axis=0)[...,0]);
title('l3')
subplot(155)
imshow(l3[0].sum(axis=0)[...,0]);
title('approximation')
figure(figsize=(25,10))
subplot(131)
imshow(sim[0].sum(axis=0))
title('Input')
subplot(132)
imshow(r[0].sum(axis=0))
title('Reconstruction')
subplot(133)
imshow((sim - r)[0].sum(axis=0));
title('Difference')
```
|
github_jupyter
|
# When To Stop Fuzzing
In the past chapters, we have discussed several fuzzing techniques. Knowing _what_ to do is important, but it is also important to know when to _stop_ doing things. In this chapter, we will learn when to _stop fuzzing_ – and use a prominent example for this purpose: The *Enigma* machine that was used in the second world war by the navy of Nazi Germany to encrypt communications, and how Alan Turing and I.J. Good used _fuzzing techniques_ to crack ciphers for the Naval Enigma machine.
Turing not only developed the foundations of computer science, including the Turing machine. Together with his assistant I.J. Good, he also invented estimators of the probability of an event occurring that has never previously occurred. We show how the Good-Turing estimator can be used to quantify the *residual risk* of a fuzzing campaign that finds no vulnerabilities – that is, how it estimates the probability of discovering a vulnerability when no vulnerability has been observed throughout the campaign.
We discuss means to speed up [coverage-based fuzzers](Coverage.ipynb) and introduce a range of estimation and extrapolation methodologies to assess and extrapolate fuzzing progress and residual risk.
**Prerequisites**
* _The chapter on [Coverage](Coverage.ipynb) discusses how to use coverage information for an executed test input to guide a coverage-based mutational greybox fuzzer_.
* Some knowledge of statistics is helpful.
```
import fuzzingbook_utils
import Fuzzer
import Coverage
```
## The Enigma Machine
It is autumn in the year 1938. Turing has just finished his PhD at Princeton University, demonstrating the limits of computation and laying the foundation for the theory of computer science. Nazi Germany is rearming. It has reoccupied the Rhineland and annexed Austria in violation of the Treaty of Versailles. It has just annexed the Sudetenland in Czechoslovakia and begins preparations to take over the rest of Czechoslovakia despite an agreement just signed in Munich.
Meanwhile, British intelligence is building up its capability to break encrypted messages used by the Germans to communicate military and naval information. The Germans are using [Enigma machines](https://en.wikipedia.org/wiki/Enigma_machine) for encryption. An Enigma machine uses a series of electro-mechanical rotor ciphers to protect military communication. Here is a picture of an Enigma machine:

By the time Turing joined Bletchley Park, Polish intelligence had reverse-engineered the logical structure of the Enigma machine and built a decryption machine called *Bomba* (perhaps because of the ticking noise it made). A bomba simulated six Enigma machines simultaneously and tried different decryption keys until the code was broken. The Polish bomba might have been the very _first fuzzer_.
Turing took it upon himself to crack ciphers of the Naval Enigma machine, which were notoriously hard to crack. The Naval Enigma used, as part of its encryption key, a three letter sequence called *trigram*. These trigrams were selected from a book, called *Kenngruppenbuch*, which contained all trigrams in a random order.
### The Kenngruppenbuch
Let's start with the Kenngruppenbuch (K-Book).
We are going to use the following Python functions.
* `shuffle(elements)` - shuffle *elements* and put items in random order.
* `choice(elements, p=weights)` - choose an item from *elements* at random. An element with twice the *weight* is twice as likely to be chosen.
* `log(a)` - returns the natural logarithm of a.
* `a ** b` - a raised to the power of b (a.k.a. [power operator](https://docs.python.org/3/reference/expressions.html#the-power-operator))
```
import string
import numpy
from numpy.random import choice
from numpy.random import shuffle
from numpy import log
```
We start with creating the set of trigrams:
```
letters = list(string.ascii_letters[26:]) # upper-case characters
trigrams = [str(a + b + c) for a in letters for b in letters for c in letters]
shuffle(trigrams)
trigrams[:10]
```
These now go into the Kenngruppenbuch. However, it was observed that some trigrams were more likely to be chosen than others. For instance, trigrams at the top-left corner of any page, or trigrams on the first or last few pages, were more likely than those somewhere in the middle of the book or page. We reflect this difference in distribution by assigning a _probability_ to each trigram, using Benford's law as introduced in [Probabilistic Fuzzing](ProbabilisticGrammarFuzzer.ipynb).
Recall that Benford's law assigns the $i$-th digit the probability $\log_{10}\left(1 + \frac{1}{i}\right)$, where the base 10 is chosen because there are 10 digits. However, Benford's law works for an arbitrary number of "digits". Hence, we assign the $i$-th trigram the probability $\log_b\left(1 + \frac{1}{i}\right)$, where the base $b$ is the number of all possible trigrams, $b=26^3$.
```
k_book = {} # Kenngruppenbuch
for i in range(1, len(trigrams) + 1):
trigram = trigrams[i - 1]
# choose weights according to Benford's law
k_book[trigram] = log(1 + 1 / i) / log(26**3 + 1)
```
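These weights form a proper probability distribution: the sum telescopes, since $\sum_{i=1}^{N}\log\left(1+\frac{1}{i}\right) = \log(N+1)$, so dividing by $\log(26^3+1)$ makes the weights sum to one. A quick standard-library check of that identity:

```python
from math import log

# Benford-style weights over all N = 26^3 trigrams, as in the K-book above
N = 26 ** 3
weights = [log(1 + 1 / i) / log(N + 1) for i in range(1, N + 1)]
print(abs(sum(weights) - 1.0) < 1e-9)  # True: the weights sum to 1
```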
Here's a random trigram from the Kenngruppenbuch:
```
random_trigram = choice(list(k_book.keys()), p=list(k_book.values()))
random_trigram
```
And this is its probability:
```
k_book[random_trigram]
```
### Fuzzing the Enigma
In the following, we introduce an extremely simplified implementation of the Naval Enigma based on the trigrams from the K-book. Of course, the encryption mechanism of the actual Enigma machine is much more sophisticated and worthy of a much more detailed investigation. We encourage the interested reader to follow up with further reading listed in the Background section.
The personnel at Bletchley Park can only check whether an encoded message is encoded with a (guessed) trigram.
Our implementation `naval_enigma()` takes a `message` and a `key` (i.e., the guessed trigram). If the given key matches the (previously computed) key for the message, `naval_enigma()` returns `True`.
```
from Fuzzer import RandomFuzzer
from Fuzzer import Runner
class EnigmaMachine(Runner):
def __init__(self, k_book):
self.k_book = k_book
self.reset()
def reset(self):
"""Resets the key register"""
self.msg2key = {}
def internal_msg2key(self, message):
"""Internal helper method.
Returns the trigram for an encoded message."""
if not message in self.msg2key:
# Simulating how an officer chooses a key from the Kenngruppenbuch to encode the message.
self.msg2key[message] = choice(list(self.k_book.keys()), p=list(self.k_book.values()))
trigram = self.msg2key[message]
return trigram
def naval_enigma(self, message, key):
"""Returns true if 'message' is encoded with 'key'"""
if key == self.internal_msg2key(message):
return True
else:
return False
```
To "fuzz" the `naval_enigma()`, our job will be to come up with a key that matches a given (encrypted) message. Since the keys only have three characters, we have a good chance to achieve this in much less than a second. (Of course, longer keys would be much harder to find via random fuzzing.)
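Why is this so fast? With uniformly random guesses, each attempt succeeds with probability $1/26^3$, so the expected number of attempts until a hit is $26^3 = 17576$ (the mean of a geometric distribution) — easily done in under a second. A back-of-the-envelope check:

```python
# success probability of one uniform guess over all three-letter keys
p = 1 / 26 ** 3
expected_attempts = 1 / p  # mean of a geometric distribution
print(round(expected_attempts))  # 17576
```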
```
class EnigmaMachine(EnigmaMachine):
def run(self, tri):
"""PASS if cur_msg is encoded with trigram tri"""
if self.naval_enigma(self.cur_msg, tri):
outcome = self.PASS
else:
outcome = self.FAIL
return (tri, outcome)
```
Now we can use the `EnigmaMachine` to check whether a certain message is encoded with a certain trigram.
```
enigma = EnigmaMachine(k_book)
enigma.cur_msg = "BrEaK mE. L0Lzz"
enigma.run("AAA")
```
The simplest way to crack an encoded message is by brute forcing. Suppose that at Bletchley Park they tried random trigrams until a message was broken.
```
class BletchleyPark(object):
def __init__(self, enigma):
self.enigma = enigma
self.enigma.reset()
self.enigma_fuzzer = RandomFuzzer(
min_length=3,
max_length=3,
char_start=65,
char_range=26)
def break_message(self, message):
"""Returning the trigram for an encoded message"""
self.enigma.cur_msg = message
while True:
(trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
if outcome == self.enigma.PASS:
break
return trigram
```
How long does it take Bletchley Park to find the key using this brute-forcing approach?
```
from Timer import Timer
enigma = EnigmaMachine(k_book)
bletchley = BletchleyPark(enigma)
with Timer() as t:
trigram = bletchley.break_message("BrEaK mE. L0Lzz")
```
Here's the key for the current message:
```
trigram
```
And no, this did not take long:
```
'%f seconds' % t.elapsed_time()
'Bletchley cracks about %d messages per second' % (1/t.elapsed_time())
```
### Turing's Observations
Okay, let's crack a few messages and count the number of times each trigram is observed.
```
from collections import defaultdict
n = 100 # messages to crack
observed = defaultdict(int)
for msg in range(0, n):
trigram = bletchley.break_message(msg)
observed[trigram] += 1
# list of trigrams that have been observed
counts = [k for k, v in observed.items() if int(v) > 0]
t_trigrams = len(k_book)
o_trigrams = len(counts)
"After cracking %d messages, we observed %d out of %d trigrams." % (
n, o_trigrams, t_trigrams)
singletons = len([k for k, v in observed.items() if int(v) == 1])
"From the %d observed trigrams, %d were observed only once." % (
o_trigrams, singletons)
```
Given a sample of previously used entries, Turing wanted to _estimate the likelihood_ that the current unknown entry was one that had been previously used, and further, to estimate the probability distribution over the previously used entries. This led to the development of estimators of the missing mass and of the true probability mass of the set of items occurring in the sample. Good worked with Turing during the war and, with Turing's permission, published the analysis of the bias of these estimators in 1953.
Suppose, after finding the keys for n=100 messages, we have observed the trigram "ABC" exactly $X_\text{ABC}=10$ times. What is the probability $p_\text{ABC}$ that "ABC" is the key for the next message? Empirically, we would estimate $\hat p_\text{ABC}=\frac{X_\text{ABC}}{n}=0.1$. We can derive the empirical estimates for all other trigrams that we have observed. However, it becomes quickly evident that the complete probability mass is distributed over the *observed* trigrams. This leaves no mass for *unobserved* trigrams, i.e., the probability of discovering a new trigram. This is called the missing probability mass or the discovery probability.
Turing and Good derived an estimate of the *discovery probability* $p_0$, i.e., the probability to discover an unobserved trigram, as the number $f_1$ of trigrams observed exactly once divided by the total number $n$ of messages cracked:
$$
p_0 = \frac{f_1}{n}
$$
where $f_1$ is the number of singletons and $n$ is the number of cracked messages.
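As a tiny numeric sketch of the estimator (using made-up counts, not the data above):

```python
from collections import Counter

# Hypothetical sample of 10 cracked messages (illustration only)
sample = ["ABC", "ABC", "XYZ", "QRS", "ABC", "XYZ", "JKL", "ABC", "MNO", "XYZ"]
observed = Counter(sample)

n = len(sample)  # number of messages cracked
f1 = sum(1 for count in observed.values() if count == 1)  # singletons: QRS, JKL, MNO
p0 = f1 / n  # Good-Turing estimate of the discovery probability

print(f1)  # 3
print(p0)  # 0.3
```

Three of the ten messages used a trigram seen exactly once, so the estimated probability that the *next* message uses a previously unseen trigram is 0.3.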
Let's explore this idea a bit. We'll extend `BletchleyPark` to crack `n` messages and record the number of trigrams observed as the number of cracked messages increases.
```
class BletchleyPark(BletchleyPark):
def break_message(self, message):
"""Returning the trigram for an encoded message"""
# For the following experiment, we want to make it practical
# to break a large number of messages. So, we remove the
# loop and just return the trigram for a message.
#
# enigma.cur_msg = message
# while True:
# (trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
# if outcome == self.enigma.PASS:
# break
trigram = enigma.internal_msg2key(message)
return trigram
def break_n_messages(self, n):
"""Returns how often each trigram has been observed,
and #trigrams discovered for each message."""
observed = defaultdict(int)
timeseries = [0] * n
# Crack n messages and record #trigrams observed as #messages increases
cur_observed = 0
for cur_msg in range(0, n):
trigram = self.break_message(cur_msg)
observed[trigram] += 1
if (observed[trigram] == 1):
cur_observed += 1
timeseries[cur_msg] = cur_observed
return (observed, timeseries)
```
Let's crack 2000 messages and compute the GT-estimate.
```
n = 2000 # messages to crack
bletchley = BletchleyPark(enigma)
(observed, timeseries) = bletchley.break_n_messages(n)
```
Let us determine the Good-Turing estimate of the probability that the next trigram has not been observed before:
```
singletons = len([k for k, v in observed.items() if int(v) == 1])
gt = singletons / n
gt
```
We can verify the Good-Turing estimate empirically by computing the empirical probability that the next trigram has not been observed before. To do this, we repeat the following experiment `repeats=1000` times and report the average: if the next message uses a new trigram, return 1, otherwise return 0. Note that here, we do not record the newly discovered trigrams as observed.
```
repeats = 1000 # experiment repetitions
newly_discovered = 0
for cur_msg in range(n, n + repeats):
trigram = bletchley.break_message(cur_msg)
if(observed[trigram] == 0):
newly_discovered += 1
newly_discovered / repeats
```
Looks pretty accurate, huh? The difference between the estimates is reasonably small, probably below 0.03. However, the Good-Turing estimate did not require nearly as many computational resources as the empirical estimate. Unlike the empirical estimate, the Good-Turing estimate can be computed during the campaign. Unlike the empirical estimate, the Good-Turing estimate requires no additional, redundant repetitions.
In fact, the Good-Turing (GT) estimator often performs close to the best estimator for arbitrary distributions ([Try it here!](#Kenngruppenbuch)). Of course, the concept of *discovery* is not limited to trigrams. The GT estimator is also used in the study of natural languages to estimate the likelihood that we haven't ever heard or read the word we next encounter. The GT estimator is used in ecology to estimate the likelihood of discovering a new, unseen species in our quest to catalog all _species_ on earth. Later, we will see how it can be used to estimate the probability to discover a vulnerability when none has been observed, yet (i.e., residual risk).
Alan Turing was interested in the _complement_ $(1-GT)$ which gives the proportion of _all_ messages for which the Brits have already observed the trigram needed for decryption. For this reason, the complement is also called sample coverage. The *sample coverage* quantifies how much we know about decryption of all messages given the few messages we have already decrypted.
The probability that the next message can be decrypted with a previously discovered trigram is:
```
1 - gt
```
The *inverse* of the GT-estimate (1/GT) is a _maximum likelihood estimate_ of the expected number of messages that we can decrypt with previously observed trigrams before having to find a new trigram to decrypt the message. In our setting, the number of messages for which we can expect to reuse previous trigrams before having to discover a new trigram is:
```
1 / gt
```
But why is GT so accurate? Intuitively, despite a large sampling effort (i.e., cracking $n$ messages), there are still $f_1$ trigrams that have been observed only once. We could say that such "singletons" are very rare trigrams. Hence, the probability that the next message is encoded with such a rare but observed trigram gives a good upper bound on the probability that the next message is encoded with an evidently much rarer, unobserved trigram. Since Turing's observation 80 years ago, an entire statistical theory has been developed around the hypothesis that rare, observed "species" are good predictors of unobserved species.
Let's have a look at the distribution of rare trigrams.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy
frequencies = [v for k, v in observed.items() if int(v) > 0]
frequencies.sort(reverse=True)
# Uncomment to see how often each discovered trigram has been observed
# print(frequencies)
# frequency of rare trigrams
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.hist(frequencies, range=[1, 21], bins=numpy.arange(1, 21) - 0.5)
plt.xticks(range(1, 21))
plt.xlabel('# of occurrences (e.g., 1 represents singleton trigrams)')
plt.ylabel('Frequency of occurrences')
plt.title('Figure 1. Frequency of Rare Trigrams')
# trigram discovery over time
plt.subplot(1, 2, 2)
plt.plot(timeseries)
plt.xlabel('# of messages cracked')
plt.ylabel('# of trigrams discovered')
plt.title('Figure 2. Trigram Discovery Over Time');
# Statistics for most and least often observed trigrams
singletons = len([v for k, v in observed.items() if int(v) == 1])
total = len(frequencies)
print("%3d of %3d trigrams (%.3f%%) have been observed 1 time (i.e., are singleton trigrams)."
% (singletons, total, singletons * 100 / total))
print("%3d of %3d trigrams (%.3f%%) have been observed %d times."
      % (1, total, 100 / total, frequencies[0]))
```
The *majority of trigrams* have been observed only once, as we can see in Figure 1 (left). In other words, the majority of observed trigrams are "rare" singletons. In Figure 2 (right), we can see that discovery is in full swing. The trajectory seems almost linear. However, since there is a finite number of trigrams ($26^3 = 17{,}576$), trigram discovery will slow down and eventually approach an asymptote (the total number of trigrams).
### Boosting the Performance of BletchleyPark
Some trigrams have been observed very often. We call these "abundant" trigrams.
```
print("Trigram : Frequency")
for trigram in sorted(observed, key=observed.get, reverse=True):
if observed[trigram] > 10:
print(" %s : %d" % (trigram, observed[trigram]))
```
We'll speed up the code breaking by _trying the abundant trigrams first_.
First, we'll find out how many messages can be cracked by the existing brute-forcing strategy at Bletchley Park, given a maximum number of attempts. We'll also track the number of messages cracked over time (`timeseries`).
```
class BletchleyPark(BletchleyPark):
def __init__(self, enigma):
super().__init__(enigma)
self.cur_attempts = 0
self.cur_observed = 0
self.observed = defaultdict(int)
self.timeseries = [None] * max_attempts * 2
def break_message(self, message):
"""Returns the trigram for an encoded message, and
track #trigrams observed as #attempts increases."""
self.enigma.cur_msg = message
while True:
self.cur_attempts += 1 # NEW
(trigram, outcome) = self.enigma_fuzzer.run(self.enigma)
self.timeseries[self.cur_attempts] = self.cur_observed # NEW
if outcome == self.enigma.PASS:
break
return trigram
def break_max_attempts(self, max_attempts):
"""Returns #messages successfully cracked after a given #attempts."""
cur_msg = 0
n_messages = 0
while True:
trigram = self.break_message(cur_msg)
# stop when reaching max_attempts
if self.cur_attempts >= max_attempts:
break
# update observed trigrams
n_messages += 1
self.observed[trigram] += 1
if (self.observed[trigram] == 1):
self.cur_observed += 1
self.timeseries[self.cur_attempts] = self.cur_observed
cur_msg += 1
return n_messages
```
`original` is the number of messages cracked by the bruteforcing strategy, given 100k attempts. Can we beat this?
```
max_attempts = 100000
bletchley = BletchleyPark(enigma)
original = bletchley.break_max_attempts(max_attempts)
original
```
Now, we'll create a boosting strategy by trying trigrams first that we have previously observed most often.
```
class BoostedBletchleyPark(BletchleyPark):
def break_message(self, message):
"""Returns the trigram for an encoded message, and
track #trigrams observed as #attempts increases."""
self.enigma.cur_msg = message
# boost cracking by trying observed trigrams first
for trigram in sorted(self.prior, key=self.prior.get, reverse=True):
self.cur_attempts += 1
(_, outcome) = self.enigma.run(trigram)
self.timeseries[self.cur_attempts] = self.cur_observed
if outcome == self.enigma.PASS:
return trigram
# else fall back to normal cracking
return super().break_message(message)
```
`boosted` is the number of messages cracked by the boosted strategy.
```
boostedBletchley = BoostedBletchleyPark(enigma)
boostedBletchley.prior = observed
boosted = boostedBletchley.break_max_attempts(max_attempts)
boosted
```
We see that the boosted technique cracks substantially more messages. It is worthwhile to record how often each trigram is being used as key and to try them in the order of their occurrence.
***Try it***. *For practical reasons, we use a large number of previous observations as prior (`boostedBletchley.prior = observed`). You can try to change the code such that the strategy uses the trigram frequencies (`self.observed`) observed **during** the campaign itself to boost the campaign. You will need to increase `max_attempts` and wait for a long while.*
Let's compare the number of trigrams discovered over time.
```
# print plots
line_old, = plt.plot(bletchley.timeseries, label="Bruteforce Strategy")
line_new, = plt.plot(boostedBletchley.timeseries, label="Boosted Strategy")
plt.legend(handles=[line_old, line_new])
plt.xlabel('# of cracking attempts')
plt.ylabel('# of trigrams discovered')
plt.title('Trigram Discovery Over Time');
```
We see that the boosted fuzzer is consistently superior to the random fuzzer.
## Estimating the Probability of Path Discovery
<!-- ## Residual Risk: Probability of Failure after an Unsuccessful Fuzzing Campaign -->
<!-- Residual risk is not formally defined in this section, so I made the title a bit more generic -- AZ -->
So, what does Turing's observation for the Naval Enigma have to do with fuzzing _arbitrary_ programs? Turing's assistant I.J. Good extended and published Turing's work on the estimation procedures in Biometrika, a journal for theoretical biostatistics that still exists today. Good did not talk about trigrams. Instead, he calls them "species". Hence, the GT estimator is presented to estimate how likely it is to discover a new species, given an existing sample of individuals (each of which belongs to exactly one species).
Now, we can associate program inputs to species, as well. For instance, we could define the path that is exercised by an input as that input's species. This would allow us to _estimate the probability that fuzzing discovers a new path._ Later, we will see how this discovery probability also estimates the likelihood of discovering a vulnerability when we have not seen one yet (residual risk).
Let's do this. We identify the species for an input by computing a hash-id over the set of statements exercised by that input. In the [Coverage](Coverage.ipynb) chapter, we have learned about the [Coverage class](Coverage.ipynb#A-Coverage-Class) which collects coverage information for an executed Python function. As an example, the function [`cgi_decode()`](Coverage.ipynb#A-CGI-Decoder) was introduced. The function `cgi_decode()` takes a string encoded for a website URL and decodes it back to its original form.
Here's what `cgi_decode()` does and how coverage is computed.
```
from Coverage import Coverage, cgi_decode
encoded = "Hello%2c+world%21"
with Coverage() as cov:
decoded = cgi_decode(encoded)
decoded
print(cov.coverage());
```
### Trace Coverage
First, we will introduce the concept of execution traces, which are a coarse abstraction of the execution path taken by an input. Compared to the definition of path, a trace ignores the sequence in which statements are exercised or how often each statement is exercised.
* `pickle.dumps()` - serializes an object by producing a byte array from all the information in the object
* `hashlib.md5()` - produces a 128-bit hash value from a byte array
```
import pickle
import hashlib
def getTraceHash(cov):
pickledCov = pickle.dumps(cov.coverage())
hashedCov = hashlib.md5(pickledCov).hexdigest()
return hashedCov
```
Remember our model for the Naval Enigma machine? Each message must be decrypted using exactly one trigram while multiple messages may be decrypted by the same trigram. Similarly, we need each input to yield exactly one trace hash while multiple inputs can yield the same trace hash.
Let's see whether this is true for our `getTraceHash()` function.
```
inp1 = "a+b"
inp2 = "a+b+c"
inp3 = "abc"
with Coverage() as cov1:
cgi_decode(inp1)
with Coverage() as cov2:
cgi_decode(inp2)
with Coverage() as cov3:
cgi_decode(inp3)
```
The inputs `inp1` and `inp2` execute the same statements:
```
inp1, inp2
cov1.coverage() - cov2.coverage()
```
The difference between both coverage sets is empty. Hence, the trace hashes should be the same:
```
getTraceHash(cov1)
getTraceHash(cov2)
assert getTraceHash(cov1) == getTraceHash(cov2)
```
In contrast, the inputs `inp1` and `inp3` execute _different_ statements:
```
inp1, inp3
cov1.coverage() - cov3.coverage()
```
Hence, the trace hashes should be different, too:
```
getTraceHash(cov1)
getTraceHash(cov3)
assert getTraceHash(cov1) != getTraceHash(cov3)
```
### Measuring Trace Coverage over Time
In order to measure trace coverage for a `function` executing a `population` of fuzz inputs, we slightly adapt the `population_coverage()` function from the [Chapter on Coverage](Coverage.ipynb#Coverage-of-Basic-Fuzzing).
```
def population_trace_coverage(population, function):
cumulative_coverage = []
all_coverage = set()
cumulative_singletons = []
cumulative_doubletons = []
singletons = set()
doubletons = set()
for s in population:
with Coverage() as cov:
try:
function(s)
except BaseException:
pass
cur_coverage = set([getTraceHash(cov)])
# singletons and doubletons -- we will need them later
doubletons -= cur_coverage
doubletons |= singletons & cur_coverage
singletons -= cur_coverage
singletons |= cur_coverage - (cur_coverage & all_coverage)
cumulative_singletons.append(len(singletons))
cumulative_doubletons.append(len(doubletons))
# all and cumulative coverage
all_coverage |= cur_coverage
cumulative_coverage.append(len(all_coverage))
return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons
```
Let's see whether our new function really contains coverage information only for *two* traces given our three inputs for `cgi_decode`.
```
all_coverage = population_trace_coverage([inp1, inp2, inp3], cgi_decode)[0]
assert len(all_coverage) == 2
```
Unfortunately, the `cgi_decode()` function is too simple. Instead, we will use the original Python [HTMLParser](https://docs.python.org/3/library/html.parser.html) as our test subject.
```
from Fuzzer import RandomFuzzer
from Coverage import population_coverage
from html.parser import HTMLParser
trials = 50000 # number of random inputs generated
```
Let's run a random fuzzer $n=50000$ times and plot trace coverage over time.
```
# create wrapper function
def my_parser(inp):
parser = HTMLParser() # resets the HTMLParser object for every fuzz input
parser.feed(inp)
# create random fuzzer
fuzzer = RandomFuzzer(min_length=1, max_length=100,
char_start=32, char_range=94)
# create population of fuzz inputs
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
# execute and measure trace coverage
trace_timeseries = population_trace_coverage(population, my_parser)[1]
# execute and measure code coverage
code_timeseries = population_coverage(population, my_parser)[1]
# plot trace coverage over time
plt.figure(num=None, figsize=(12, 4), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 2, 1)
plt.plot(trace_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.title('Trace Coverage Over Time')
# plot code coverage over time
plt.subplot(1, 2, 2)
plt.plot(code_timeseries)
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements covered')
plt.title('Code Coverage Over Time');
```
Above, we can see trace coverage (left) and code coverage (right) over time. Here are our observations.
1. **Trace coverage is more robust**. There are fewer sudden jumps in the graph compared to code coverage.
2. **Trace coverage is more fine-grained.** There are more traces than statements covered at the end (y-axis).
3. **Trace coverage grows more steadily**. Code coverage exercises, with the very first input, more than half of the statements that it covers after 50k inputs. In contrast, the number of traces covered grows slowly and steadily, since each input yields exactly one execution trace.
It is for this reason that one of the most prominent and successful fuzzers today, american fuzzy lop (AFL), uses a similar *measure of progress* (a hash computed over the branches exercised by the input).
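AFL actually instruments branch transitions in compiled code; as a rough, hypothetical Python sketch of the same idea, we can hash the set of consecutive line pairs (a stand-in for branches) exercised by an input using `sys.settrace` (the names `get_branch_hash` and `sign` are ours):

```python
import hashlib
import pickle
import sys

def get_branch_hash(func, arg):
    """Hash the set of consecutive line pairs (a rough stand-in for
    AFL's branch tuples) exercised while running func(arg)."""
    pairs = set()
    prev_line = None

    def tracer(frame, event, _):
        nonlocal prev_line
        if event == 'line':
            if prev_line is not None:
                pairs.add((prev_line, frame.f_lineno))
            prev_line = frame.f_lineno
        return tracer

    sys.settrace(tracer)
    try:
        func(arg)
    finally:
        sys.settrace(None)
    return hashlib.md5(pickle.dumps(sorted(pairs))).hexdigest()

def sign(x):
    if x < 0:
        return -1
    return 1

# Inputs taking the same path yield the same hash; different paths differ
print(get_branch_hash(sign, -5) == get_branch_hash(sign, -7))  # True
print(get_branch_hash(sign, -5) == get_branch_hash(sign, 3))   # False
```

Two inputs that exercise the same branch transitions map to the same hash, so the hash identifies a "species" just like `getTraceHash()` does for statement sets.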
### Evaluating the Discovery Probability Estimate
Let's find out how the Good-Turing estimator performs as an estimate of discovery probability when we are fuzzing to discover execution traces rather than trigrams.
To measure the empirical probability, we execute the same population of inputs ($n=50000$) and measure at regular intervals (`measurements=100`). During each measurement, we repeat the following experiment `repeats=500` times and report the average: if the next input yields a new trace, return 1, otherwise return 0. Note that during these repetitions, we do not record the newly discovered traces as observed.
```
repeats = 500 # experiment repetitions
measurements = 100 # experiment measurements
emp_timeseries = []
all_coverage = set()
step = int(trials / measurements)
for i in range(0, trials, step):
if i - step >= 0:
for j in range(step):
inp = population[i - j]
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
all_coverage |= set([getTraceHash(cov)])
discoveries = 0
for _ in range(repeats):
inp = fuzzer.fuzz()
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
if getTraceHash(cov) not in all_coverage:
discoveries += 1
emp_timeseries.append(discoveries / repeats)
```
Now, we compute the Good-Turing estimate over time.
```
gt_timeseries = []
singleton_timeseries = population_trace_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
gt_timeseries.append(singleton_timeseries[i - 1] / i)
```
Let's go ahead and plot both time series.
```
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');
```
Again, the Good-Turing estimate appears to be *highly accurate*. In fact, the empirical estimator has a much lower precision, as indicated by the large swings. You can try to increase the number of repetitions (`repeats`) to get more precision for the empirical estimates; however, this comes at the cost of waiting much longer.
### Discovery Probability Quantifies Residual Risk
Alright. You have gotten a hold of a couple of powerful machines and used them to fuzz a software system for several months without finding any vulnerabilities. Is the system vulnerable?
Well, who knows? We cannot say for sure; there is always some residual risk. Testing is not verification. Maybe the next test input that is generated reveals a vulnerability.
Let's say *residual risk* is the probability that the next test input reveals a vulnerability that has not been found, yet. Böhme \cite{stads} has shown that the Good-Turing estimate of the discovery probability is also an estimate of the maximum residual risk.
**Proof sketch (Residual Risk)**. Here is a proof sketch that shows that an estimator of discovery probability for an arbitrary definition of species gives an upper bound on the probability to discover a vulnerability when none has been found: Suppose, for each "old" species A (here, execution trace), we derive two "new" species: Some inputs belonging to A expose a vulnerability while others belonging to A do not. We know that _only_ species that do not expose a vulnerability have been discovered. Hence, _all_ species exposing a vulnerability and _some_ species that do not expose a vulnerability remain undiscovered. Hence, the probability to discover a new species gives an upper bound on the probability to discover (a species that exposes) a vulnerability. **QED**.
An estimate of the discovery probability is useful in many other ways.
1. **Discovery probability**. We can estimate, at any point during the fuzzing campaign, the probability that the next input belongs to a previously unseen species (here, that it yields a new execution trace, i.e., exercises a new set of statements).
2. **Complement of discovery probability**. We can estimate the proportion of *all* inputs the fuzzer can generate for which we have already seen the species (here, execution traces). In some sense, this allows us to quantify the *progress of the fuzzing campaign towards completion*: if the probability to discover a new species is too low, we might as well abort the campaign.
3. **Inverse of discovery probability**. We can predict the number of test inputs needed, so that we can expect the discovery of a new species (here, execution trace).
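The three quantities in the list above can be bundled into a small helper function (a sketch of our own; the names are not from any library):

```python
def discovery_stats(f1, n):
    """Good-Turing discovery probability, sample coverage, and expected
    #inputs per discovery, given f1 singleton species in n inputs."""
    p0 = f1 / n                     # 1. probability next input is a new species
    coverage = 1 - p0               # 2. sample coverage (complement)
    inputs_per_discovery = 1 / p0   # 3. expected inputs until next discovery
    return p0, coverage, inputs_per_discovery

print(discovery_stats(50, 1000))  # (0.05, 0.95, 20.0)
```

With 50 singletons after 1000 inputs, we estimate a 5% chance that the next input is a new species, 95% sample coverage, and one new discovery every 20 inputs on average.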
## How Do We Know When to Stop Fuzzing?
In fuzzing, we have measures of progress such as [code coverage](Coverage.ipynb) or [grammar coverage](GrammarCoverageFuzzer.ipynb). Suppose, we are interested in covering all statements in the program. The _percentage_ of statements that have already been covered quantifies how "far" we are from completing the fuzzing campaign. However, sometimes we know only the _number_ of species $S(n)$ (here, statements) that have been discovered after generating $n$ fuzz inputs. The percentage $S(n)/S$ can only be computed if we know the _total number_ of species $S$. Even then, not all species may be feasible.
### A Success Estimator
If we do not _know_ the total number of species, then let's at least _estimate_ it: As we have seen before, species discovery slows down over time. In the beginning, many new species are discovered. Later, many inputs need to be generated before discovering the next species. In fact, given enough time, the fuzzing campaign approaches an _asymptote_. It is this asymptote that we can estimate.
In 1984, Anne Chao, a well-known theoretical biostatistician, developed an estimator $\hat S$ which estimates the asymptotic total number of species $S$:
\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{f_1^2}{2f_2} & \text{if $f_2>0$}\\
S(n) + \frac{f_1(f_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}
* where $f_1$ and $f_2$ are the numbers of singleton and doubleton species, respectively (i.e., species observed exactly once or twice, respectively), and
* where $S(n)$ is the number of species that have been discovered after generating $n$ fuzz inputs.
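Put into code, the estimator is a direct transcription of the case distinction above (the function name is ours):

```python
def chao1(Sn, f1, f2):
    """Chao1 estimate of the asymptotic total number of species,
    given S(n) observed species, f1 singletons, and f2 doubletons."""
    if f2 > 0:
        return Sn + f1 * f1 / (2 * f2)
    return Sn + f1 * (f1 - 1) / 2

print(chao1(100, 20, 10))  # 100 + 20*20/(2*10) = 120.0
print(chao1(100, 20, 0))   # 100 + 20*19/2     = 290.0
```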
So, how does Chao's estimate perform? To investigate this, we generate `trials=400000` fuzz inputs using a fuzzer setting that allows us to see an asymptote in a few seconds. We measure trace coverage. Halfway into our fuzzing campaign (`trials/2=200000`), we compute Chao's estimate $\hat S$ of the asymptotic total number of species. Then, we run the remainder of the campaign to see the "empirical" asymptote.
```
trials = 400000
fuzzer = RandomFuzzer(min_length=2, max_length=4,
char_start=32, char_range=32)
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
_, trace_ts, f1_ts, f2_ts = population_trace_coverage(population, my_parser)
time = int(trials / 2)
time
f1 = f1_ts[time]
f2 = f2_ts[time]
Sn = trace_ts[time]
if f2 > 0:
hat_S = Sn + f1 * f1 / (2 * f2)
else:
hat_S = Sn + f1 * (f1 - 1) / 2
```
After executing `time` fuzz inputs (half of all), we have covered these many traces:
```
time
Sn
```
We can estimate there are this many traces in total:
```
hat_S
```
Hence, we have achieved this percentage of the estimate:
```
100 * Sn / hat_S
```
After executing `trials` fuzz inputs, we have covered these many traces:
```
trials
trace_ts[trials - 1]
```
The accuracy of Chao's estimator is quite reasonable. It isn't always accurate -- particularly at the beginning of a fuzzing campaign when the [discovery probability](WhenIsEnough.ipynb#Measuring-Trace-Coverage-over-Time) is still very high. Nevertheless, it demonstrates the main benefit of reporting a percentage to assess the progress of a fuzzing campaign towards completion.
***Try it***. *Try setting `trials` to 1 million and `time` to `int(trials / 4)`.*
### Extrapolating Fuzzing Success
<!-- ## Cost-Benefit Analysis: Extrapolating the Number of Species Discovered -->
Suppose you have run the fuzzer for a week, which generated $n$ fuzz inputs and discovered $S(n)$ species (here, covered $S(n)$ execution traces). Instead of running the fuzzer for another week, you would like to *predict* how many more species you would discover. In 2003, Anne Chao and her team developed an extrapolation methodology to do just that. We are interested in the number $S(n+m^*)$ of species discovered if $m^*$ more fuzz inputs were generated:
\begin{align}
\hat S(n + m^*) = S(n) + \hat f_0 \left[1-\left(1-\frac{f_1}{n\hat f_0 + f_1}\right)^{m^*}\right]
\end{align}
* where $\hat f_0=\hat S - S(n)$ is an estimate of the number $f_0$ of undiscovered species, and
* where $f_1$ is the number of singleton species, i.e., those we have observed exactly once.
We can keep track of the number $f_1$ of singletons during the fuzzing campaign itself. The estimate $\hat f_0$ of the number of undiscovered species we can simply derive from Chao's estimate $\hat S$ and the number of observed species $S(n)$.
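The extrapolation formula, too, transcribes directly into a small function (again, the name is ours):

```python
def chao_extrapolate(Sn, f1, hat_f0, n, m_star):
    """Chao's extrapolation: expected #species after m_star more inputs,
    given S(n) species, f1 singletons, and hat_f0 = hat_S - S(n)."""
    if hat_f0 == 0:
        return Sn  # nothing left to discover
    return Sn + hat_f0 * (1 - (1 - f1 / (n * hat_f0 + f1)) ** m_star)

# Extrapolating m_star = 0 more inputs changes nothing:
print(chao_extrapolate(100, 20, 20, 1000, 0))  # 100.0
# For very large m_star, the prediction approaches hat_S = Sn + hat_f0:
print(round(chao_extrapolate(100, 20, 20, 1000, 10**7)))  # 120
```

Note how the prediction is anchored at $S(n)$ and saturates at $\hat S$, consistent with the asymptote estimated by Chao1.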
Let's see how Chao's extrapolator performs by comparing the predicted number of species to the empirical number of species.
```
prediction_ts = [None] * time
f0 = hat_S - Sn
for m in range(trials - time):
assert (time * f0 + f1) != 0 , 'time:%s f0:%s f1:%s' % (time, f0,f1)
prediction_ts.append(Sn + f0 * (1 - (1 - f1 / (time * f0 + f1)) ** m))
plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
plt.plot(trace_ts, color='white')
plt.plot(trace_ts[:time])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.subplot(1, 3, 2)
line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised')
plt.subplot(1, 3, 3)
line_emp, = plt.plot(trace_ts, color='grey', label="Actual progress")
line_cur, = plt.plot(trace_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_emp, line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of traces exercised');
```
The prediction from Chao's extrapolator looks quite accurate. We make a prediction at $time = trials/2$. Despite extrapolating twice as far (i.e., to $trials$), we can see that the predicted value (black, dashed line) closely matches the empirical value (grey, solid line).
***Try it***. Again, try setting `trials` to 1 million and `time` to `int(trials / 4)`.
## Lessons Learned
* One can measure the _progress_ of a fuzzing campaign (as species over time, i.e., $S(n)$).
* One can measure the _effectiveness_ of a fuzzing campaign (as asymptotic total number of species $S$).
* One can estimate the _effectiveness_ of a fuzzing campaign using the Chao1-estimator $\hat S$.
* One can extrapolate the _progress_ of a fuzzing campaign, $\hat S(n+m^*)$.
* One can estimate the _residual risk_ (i.e., the probability that a bug exists that has not been found) using the Good-Turing estimator $GT$ of the species discovery probability.
## Next Steps
This chapter is the last in the book! If you want to continue reading, have a look at the [Appendices](99_Appendices.ipynb). Otherwise, _make use of what you have learned and go and create great fuzzers and test generators!_
## Background
* A **statistical framework for fuzzing**, inspired from ecology. Marcel Böhme. [STADS: Software Testing as Species Discovery](https://mboehme.github.io/paper/TOSEM18.pdf). ACM TOSEM 27(2):1--52
* Estimating the **discovery probability**: I.J. Good. 1953. [The population frequencies of species and the estimation of population parameters](https://www.jstor.org/stable/2333344). Biometrika 40:237–264.
* Estimating the **asymptotic total number of species** when each input can belong to exactly one species: Anne Chao. 1984. [Nonparametric estimation of the number of classes in a population](https://www.jstor.org/stable/4615964). Scandinavian Journal of Statistics 11:265–270
* Estimating the **asymptotic total number of species** when each input can belong to one or more species: Anne Chao. 1987. [Estimating the population size for capture-recapture data with unequal catchability](https://www.jstor.org/stable/2531532). Biometrics 43:783–791
* **Extrapolating** the number of discovered species: Tsung-Jen Shen, Anne Chao, and Chih-Feng Lin. 2003. [Predicting the Number of New Species in Further Taxonomic Sampling](http://chao.stat.nthu.edu.tw/wordpress/paper/2003_Ecology_84_P798.pdf). Ecology 84, 3 (2003), 798–804.
## Exercises
I.J. Good and Alan Turing developed an estimator for the case where each input belongs to exactly one species. For instance, each input yields exactly one execution trace (see function [`getTraceHash`](#Trace-Coverage)). However, this is not true in general. For instance, each input exercises multiple statements and branches in the source code. Generally, each input can belong to one *or more* species.
In this extended model, the underlying statistics are quite different. Yet, all estimators that we have discussed in this chapter turn out to be almost identical to those for the simple, single-species model. For instance, the Good-Turing estimator $C$ is defined as
$$C=\frac{Q_1}{n}$$
where $Q_1$ is the number of singleton species and $n$ is the number of generated test cases.
Throughout the fuzzing campaign, we record for each species the *incidence frequency*, i.e., the number of inputs that belong to that species. Again, we define a species $i$ as *singleton species* if we have seen exactly one input that belongs to species $i$.
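As a small illustration of these definitions, the Good-Turing estimate can be computed directly from incidence data. The species names and counts below are made up purely for illustration:

```
from collections import Counter

# Toy incidence data: for each generated input, the set of species it
# belongs to (purely illustrative values).
species_per_input = [{"a"}, {"a", "b"}, {"c"}, {"a"}]
n = len(species_per_input)  # number of generated test cases

incidence = Counter()
for species_set in species_per_input:
    for species in species_set:
        incidence[species] += 1

Q1 = sum(1 for count in incidence.values() if count == 1)  # singletons
C = Q1 / n  # Good-Turing estimate of the discovery probability
print(C)  # 0.5, since "b" and "c" are singletons and n = 4
```

Here species `"a"` has incidence 3, while `"b"` and `"c"` are singletons, so $Q_1 = 2$ and $C = 2/4$.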
### Exercise 1: Estimate and Evaluate the Discovery Probability for Statement Coverage
In this exercise, we create a Good-Turing estimator for the simple fuzzer.
#### Part 1: Population Coverage
Implement a function `population_stmt_coverage()` as in [the section on estimating discovery probability](#Estimating-the-Discovery-Probability) that monitors the number of singletons and doubletons over time, i.e., as the number $i$ of test inputs increases.
```
from Coverage import population_coverage, Coverage
...
```
**Solution.** Here we go:
```
def population_stmt_coverage(population, function):
cumulative_coverage = []
all_coverage = set()
cumulative_singletons = []
cumulative_doubletons = []
singletons = set()
doubletons = set()
for s in population:
with Coverage() as cov:
try:
function(s)
except BaseException:
pass
cur_coverage = cov.coverage()
# singletons and doubletons
doubletons -= cur_coverage
doubletons |= singletons & cur_coverage
singletons -= cur_coverage
singletons |= cur_coverage - (cur_coverage & all_coverage)
cumulative_singletons.append(len(singletons))
cumulative_doubletons.append(len(doubletons))
# all and cumulative coverage
all_coverage |= cur_coverage
cumulative_coverage.append(len(all_coverage))
return all_coverage, cumulative_coverage, cumulative_singletons, cumulative_doubletons
```
#### Part 2: Population
Use the random `fuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` from [the chapter on Fuzzers](Fuzzer.ipynb) to generate a population of $n=10000$ fuzz inputs.
```
from Fuzzer import RandomFuzzer
from html.parser import HTMLParser
...
```
**Solution.** This is fairly straightforward:
```
trials = 2000  # increase to 10000 for better convergence. Will take a while...
```
We create a wrapper function...
```
def my_parser(inp):
parser = HTMLParser() # resets the HTMLParser object for every fuzz input
parser.feed(inp)
```
... and a random fuzzer:
```
fuzzer = RandomFuzzer(min_length=1, max_length=1000,
char_start=0, char_range=255)
```
We fill the population:
```
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
```
#### Part 3: Estimating Probabilities
Execute the generated inputs on the Python HTML parser (`from html.parser import HTMLParser`) and estimate the probability that the next input covers a previously uncovered statement (i.e., the discovery probability) using the Good-Turing estimator.
**Solution.** Here we go:
```
measurements = 100 # experiment measurements
step = int(trials / measurements)
gt_timeseries = []
singleton_timeseries = population_stmt_coverage(population, my_parser)[2]
for i in range(1, trials + 1, step):
gt_timeseries.append(singleton_timeseries[i - 1] / i)
```
#### Part 4: Empirical Evaluation
Empirically evaluate the accuracy of the Good-Turing estimator (using $10000$ repetitions) of the probability to cover new statements using the experimental procedure at the end of [the section on estimating discovery probability](#Estimating-the-Discovery-Probability).
**Solution.** This is as above:
```
# increase to 10000 for better precision (less variance). Will take a while..
repeats = 100
emp_timeseries = []
all_coverage = set()
for i in range(0, trials, step):
if i - step >= 0:
for j in range(step):
inp = population[i - j]
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
all_coverage |= cov.coverage()
discoveries = 0
for _ in range(repeats):
inp = fuzzer.fuzz()
with Coverage() as cov:
try:
my_parser(inp)
except BaseException:
pass
        # If the set difference is non-empty, a new stmt was discovered
if cov.coverage() - all_coverage:
discoveries += 1
emp_timeseries.append(discoveries / repeats)
%matplotlib inline
import matplotlib.pyplot as plt
line_emp, = plt.semilogy(emp_timeseries, label="Empirical")
line_gt, = plt.semilogy(gt_timeseries, label="Good-Turing")
plt.legend(handles=[line_emp, line_gt])
plt.xticks(range(0, measurements + 1, int(measurements / 5)),
range(0, trials + 1, int(trials / 5)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('discovery probability')
plt.title('Discovery Probability Over Time');
```
### Exercise 2: Extrapolate and Evaluate Statement Coverage
In this exercise, we use Chao's extrapolation method to estimate the success of fuzzing.
#### Part 1: Create Population
Use the random `fuzzer(min_length=1, max_length=1000, char_start=0, char_range=255)` to generate a population of $n=400000$ fuzz inputs.
**Solution.** Here we go:
```
trials = 400 # Use 400000 for actual solution. This takes a while!
population = []
for i in range(trials):
population.append(fuzzer.fuzz())
_, stmt_ts, Q1_ts, Q2_ts = population_stmt_coverage(population, my_parser)
```
#### Part 2: Compute Estimate
Compute an estimate of the total number of statements $\hat S$ after $n/4=100000$ fuzz inputs were generated. In the extended model, $\hat S$ is computed as
\begin{align}
\hat S_\text{Chao1} = \begin{cases}
S(n) + \frac{Q_1^2}{2Q_2} & \text{if $Q_2>0$}\\
S(n) + \frac{Q_1(Q_1-1)}{2} & \text{otherwise}
\end{cases}
\end{align}
* where $Q_1$ and $Q_2$ is the number of singleton and doubleton statements, respectively (i.e., statements that have been exercised by exactly one or two fuzz inputs, resp.), and
* where $S(n)$ is the number of statements that have been (dis)covered after generating $n$ fuzz inputs.
**Solution.** Here we go:
```
time = int(trials / 4)
Q1 = Q1_ts[time]
Q2 = Q2_ts[time]
Sn = stmt_ts[time]
if Q2 > 0:
hat_S = Sn + Q1 * Q1 / (2 * Q2)
else:
hat_S = Sn + Q1 * (Q1 - 1) / 2
print("After executing %d fuzz inputs, we have covered %d **(%.1f %%)** statements.\n" % (time, Sn, 100 * Sn / hat_S) +
"After executing %d fuzz inputs, we estimate there are %d statements in total.\n" % (time, hat_S) +
"After executing %d fuzz inputs, we have covered %d statements." % (trials, stmt_ts[trials - 1]))
```
#### Part 3: Compute and Evaluate Extrapolator
Compute and evaluate Chao's extrapolator by comparing the predicted number of statements to the empirical number of statements.
**Solution.** Here's our solution:
```
prediction_ts = [None] * time
Q0 = hat_S - Sn
for m in range(trials - time):
prediction_ts.append(Sn + Q0 * (1 - (1 - Q1 / (time * Q0 + Q1)) ** m))
plt.figure(num=None, figsize=(12, 3), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(1, 3, 1)
plt.plot(stmt_ts, color='white')
plt.plot(stmt_ts[:time])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised')
plt.subplot(1, 3, 2)
line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised')
plt.subplot(1, 3, 3)
line_emp, = plt.plot(stmt_ts, color='grey', label="Actual progress")
line_cur, = plt.plot(stmt_ts[:time], label="Ongoing fuzzing campaign")
line_pred, = plt.plot(prediction_ts, linestyle='--',
color='black', label="Predicted progress")
plt.legend(handles=[line_emp, line_cur, line_pred])
plt.xticks(range(0, trials + 1, int(time)))
plt.xlabel('# of fuzz inputs')
plt.ylabel('# of statements exercised');
```
# Introduction to 2D plots
This notebook demonstrates how to plot some latitude-by-longitude maps of some key surface variables. Most features are available in the preinstalled `geog0111` environment.
However, the updated plotting that removes the white meridional lines around the Greenwich Meridian requires the `geog0121` virtual environment. Instructions on how to install this environment (using `conda` and the `environment.yml` file) are provided in the handbook.
### Import packages and define functions for calculations
```
'''Import packages for loading data, analysing, and plotting'''
import xarray as xr
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
import cartopy
import cartopy.crs as ccrs
import matplotlib
from netCDF4 import Dataset
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy.ma as ma
import os
import matplotlib.colors as colors
import scipy
from cartopy.util import add_cyclic_point
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
```
# Change in annual temperature under the SSP585 scenario
Here we use the CVDP output files to look at the change in annual mean surface temperature
```
#define filenames and their directories
end_period='2071-2100'
start_period='1851-1900'
ssp='ssp585'
directory_a='/data/aod/cvdp_cmip6/geog0121/UKESM1-0-LL_ssps.wrt_%s' %end_period
filename_a='%s/UKESM1_%s_%s.cvdp_data.1850-2100.nc'%(directory_a,ssp,end_period)
directory_b='/data/aod/cvdp_cmip6/geog0121/UKESM1-0-LL_ssps.wrt_%s' %start_period
filename_b='%s/UKESM1_%s_%s.cvdp_data.1850-2100.nc'%(directory_b,ssp,start_period)
# load files
expt_a_file=xr.open_dataset(filename_a,decode_times=False)
expt_b_file=xr.open_dataset(filename_b,decode_times=False)
# load the coordinates
lat=expt_a_file['lat']
lon=expt_a_file['lon']
# load the variables themselves
variable_name='tas_spatialmean_ann'
expt_a=expt_a_file[variable_name]
expt_b=expt_b_file[variable_name]
# create the difference
diff=expt_a-expt_b
diff
```
### Using xarray's simplest plotting routine
```
diff.plot()
```
Whilst this plot clearly shows what is going on, it is missing several useful features:
* A sensible colormap
* The coastline (or country borders) to help orientate you
* A logical scale for the colors to use
Whatever you decide to plot, it is always worth selecting a relevant colormap. All of the easily available colormaps can be seen at [Matplotlib's reference pages](https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html). For this instance, I am picking a sequential one that goes from yellow-orange-red (YlOrRd), with the keyword `cmap=` in the plot call.
Here I am going to plot every 2 degrees above preindustrial, using both the `levels` keyword and a call to `np.linspace()`, which subdivides the range from 0-20 (inclusive) into 11 different levels.
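As a quick check, this is what that `np.linspace()` call produces:

```
import numpy as np

# 11 evenly spaced levels covering 0-20 inclusive, i.e. every 2 degrees
levels = np.linspace(0, 20, 11)
print(levels)  # [ 0.  2.  4.  6.  8. 10. 12. 14. 16. 18. 20.]
```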
Adding the map is a little trickier. We shall use the Robinson projection, but we need to specify that first, along with a load of other map-related options, to create some axes. We then also need to pass these to the plotting routine, as well as telling it to use the Plate Carree transform to map the locations onto the Robinson projection.
A final thing to note is that we've switched to using `contourf` instead of the default option (actually `pcolormesh`). This is needed for the maps, and will otherwise throw up errors like
> 'GeoAxesSubplot' object has no attribute '_hold'
```
# Define the map projection through an "axes" call
plt.figure(figsize=(10,7)) #make the map itself nice and big
projection = ccrs.Robinson() #specify the Robinson projection
ax = plt.axes(projection=projection) #create the axes
ax.coastlines() # add the coastlines
ax.gridlines() # add some gray gridlines
ax.add_feature(cartopy.feature.BORDERS) #add the country borders
# Now overplot the map onto these axes.
fig=diff.plot.contourf(ax=ax, transform=ccrs.PlateCarree(), \
cmap='YlOrRd', \
levels=np.linspace(0,20,11))
# [note that it's given a name of `fig` through the =, so that it can be saved later]
```
# UKESM's El Nino temperature pattern
Here we use the CVDP output files to plot the temperature response to an El Nino. First we will load in the data
```
#generate filename
directory='/data/aod/cvdp_cmip6/geog0121/UKESM1-0-LL_historical.vsObs'
filename='%s/UKESM1-0-LL_PresentDay.cvdp_data.1850-2014.nc'%(directory)
# load files
expt_file=xr.open_dataset(filename,decode_times=False)
# load the coordinates
lat=expt_file['lat']
lon=expt_file['lon']
# load the variables themselves
enso_pattern=expt_file.nino34_spacomp_tas_djf1
```
Then we will specify the colorscale and map
```
#temperatures
cmap=plt.get_cmap('bwr') #define colormap
#define colormap's range and scale
cmap_limits=[-5,5]
bounds = np.linspace(cmap_limits[0], cmap_limits[1], 21)
norm = colors.BoundaryNorm(boundaries=bounds, ncolors=256)
```
Now we will make a global plot of the El Nino pattern
```
# Choose the map and projection
projection = ccrs.Robinson()
transform=ccrs.PlateCarree()
# Plot the axes
plt.figure(figsize=(10,7))
ax = plt.axes(projection=projection)
ax.coastlines()
ax.gridlines()
# Make the actual figure
fig=ax.contourf(lon,lat,enso_pattern,levels=bounds, transform=transform,cmap=cmap,norm=norm)
# Alter the color bar for the map
cax,kw = matplotlib.colorbar.make_axes(ax,location='bottom',pad=0.05,shrink=0.7)
plt.colorbar(fig,cax=cax,extend='both',**kw)
```
And then let's zoom into a smaller region of it
```
#Regional map
region=[100,280,-30,30] #[lon_min,lon_max,lat_min,lat_max]
# note the specification of the central longitude, so that it spans the dateline
projection = ccrs.PlateCarree(central_longitude=180., globe=None)
transform=ccrs.PlateCarree()
plt.figure(figsize=(10,7))
ax = plt.axes(projection=projection)
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.xlabels_bottom = False
gl.ylabels_left = False
gl.xformatter = LONGITUDE_FORMATTER
fig=ax.contourf(lon,lat,enso_pattern,levels=bounds, transform=transform,cmap=cmap,norm=norm)
ax.set_extent(region, ccrs.PlateCarree())
cax,kw = matplotlib.colorbar.make_axes(ax,location='bottom',pad=0.05,shrink=0.7)
plt.colorbar(fig,cax=cax,extend='both',**kw)
#plt.savefig(figname)
```
# Seasonal precipitation anomalies
The CVDP files can be used to create maps of the changes in seasonal precipitation. First we select the variable
```
#seasonal precipitation anomalies
variable_name='pr_spatialmean_djf'
expt_a=expt_a_file[variable_name]
expt_b=expt_b_file[variable_name]
diff=expt_a-expt_b
```
Then we define the colormap, and give it a non-linear interval
```
#precipitations
cmap=plt.get_cmap('BrBG') #define colormap
#define colormap's range and scale
bounds = [-5,-2,-1,-0.8,-0.6,-0.4,-0.2,0,0.2,0.4,0.6,0.8,1,2,5]
norm = colors.BoundaryNorm(boundaries=bounds, ncolors=256)
```
Then we can create a global map with...
```
#Global map
projection = ccrs.Robinson()
transform=ccrs.PlateCarree()
plt.figure(figsize=(10,7))
ax = plt.axes(projection=projection)
ax.coastlines()
ax.gridlines()
fig=ax.contourf(lon,lat,diff,levels=bounds, transform=transform,cmap=cmap,norm=norm)
cax,kw = matplotlib.colorbar.make_axes(ax,location='bottom',pad=0.05,shrink=0.7)
plt.colorbar(fig,cax=cax,extend='both',**kw)
```
Or a regional map with...
```
#Regional map
region=[-20,70,20,90] #[lon_min,lon_max,lat_min,lat_max]
projection = ccrs.PlateCarree(central_longitude=0.0, globe=None)
transform=ccrs.PlateCarree()
plt.figure(figsize=(10,7))
ax = plt.axes(projection=projection)
ax.coastlines()
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,alpha=0.5, linestyle='--')
gl.xlabels_top = False
gl.ylabels_left = False
#gl.xlocator = mticker.FixedLocator([-180, -45, 0, 45, 180])
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
fig=ax.contourf(lon,lat,diff,levels=bounds, transform=transform,cmap=cmap,norm=norm)
ax.set_extent(region, ccrs.PlateCarree())
cax,kw = matplotlib.colorbar.make_axes(ax,location='bottom',pad=0.05,shrink=0.7)
plt.colorbar(fig,cax=cax,extend='both',**kw)
```
If your regional plot (like this one) happens to cross the Greenwich meridian, then you will end up with a white line going straight up the middle of it. This can be fixed by adding a "cyclic point" to loop the data around the globe. To understand this, think about how you need to overlap the wrapping paper on a present to cover it completely.
There is a function in Python to do this (`add_cyclic_point`, imported above from `cartopy.util`), but unfortunately it does not work in the standard environment. The function is called in the cell below.
You will need to make your own virtual environment called `geog0121` using conda and the yml file provided. If you have not done this, then when you run the code below it will fail with the following error message...
> TypeError: invalid indexer array
```
diff, lon = add_cyclic_point(diff, coord=lon)
```
```
with open('input.txt') as t:
text = t.read().splitlines()
# add a border of floor tiles around all edges
l_rand =len(text[0])
rand = '.' * l_rand
[text.insert(x, rand) for x in range(0,l_rand)]
[text.insert(len(text), rand) for _ in range(l_rand)]
wachtruimte = [rand + rij + rand for rij in text]
```
**Part 1**
```
# def checkstoelen(wr, count):
# count += 1
# nwr = []
# o = 0
# for i,v in enumerate(wr):
# nrij = ''
# if (i < 5) | (i > (len(wr)-5)):
# nrij = v
# else:
# for j,s in enumerate(v):
# if (j < 5) | (j > (len(v)-5)):
# nrij += s
# else:
# ad = [wr[i-1][j-1], wr[i-1][j], wr[i-1][j+1],
# wr[i][j-1], wr[i][j+1],
# wr[i+1][j-1], wr[i+1][j], wr[i+1][j+1]
# ]
# if s == 'L':
# if sum(x != '#' for x in ad) == 8:
# nrij += '#'
# o += 1
# else:
# nrij += s
# elif s == '#':
# if sum(x == s for x in ad) >= 4:
# nrij += 'L'
# else:
# nrij += s
# o += 1
# else:
# nrij += s
# nwr.append(nrij)
# veranderd = set(wr) != set(nwr)
# return(nwr, veranderd, count, o)
# check = wachtruimte
# v = True
# i = 0
# while v == True:
# check, v, i, bezet = checkstoelen(check, i)
# print(bezet)
```
**Part 2**
```
def checkaangrenzend(d, m, n, l):
ll =[]
r = list(range(1,l))
for x in r:
cellen = [d[m-x][n-x], d[m-x][n], d[m-x][n+x], d[m][n-x], d[m][n+x], d[m+x][n-x], d[m+x][n], d[m+x][n+x]]
ll += [cellen]
ll_list = [[row[z] for row in ll] for z in range(len(cellen))]
ad = []
for richting in ll_list:
maxr = len(richting)
if 'L' in richting:
maxr = richting.index('L')
if '#' in richting[:maxr]:
ad.append(True)
else:
ad.append(False)
return(ad)
def checkstoelen2(wr, count, l):
count += 1
nwr = []
o = 0
for i,v in enumerate(wr):
nrij = ''
if (i < l) | (i >= (len(wr)-l)):
nrij = v
else:
for j,s in enumerate(v):
if (j < l) | (j >= (len(v)-l)):
nrij += s
else:
omgeving = checkaangrenzend(wr, i, j, l)
if s == 'L':
if sum(omgeving) == 0:
nrij += '#'
o += 1
else:
nrij += s
elif s == '#':
if sum(omgeving) >= 5:
nrij += 'L'
else:
nrij += s
o += 1
else:
nrij += s
nwr.append(nrij)
veranderd = set(wr) != set(nwr)
return(nwr, veranderd, count, o)
check = wachtruimte
v = True
i = 0
checks = [wachtruimte]
while v == True:
check, v, i, bezet = checkstoelen2(check, i, l_rand)
print(bezet)
```
# 1. Multivariate Gaussian Normal Distribution (MVN)
$$\mathcal{N}(x ; \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right)$$
- $\Sigma$: covariance matrix, positive semidefinite
- $x$: random variable vector $$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_M \end{bmatrix}$$
eg.
$\mu = \begin{bmatrix}2 \\ 3 \end{bmatrix}$,
$\Sigma = \begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}$
```
%matplotlib inline
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt
mu = [2, 3]
cov = [1, 0], [0, 1]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(-1, 6, 120)
yy = np.linspace(-1, 6, 150)
XX, YY = np.meshgrid(xx, yy)
plt.contour(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.xlim(0, 4)
plt.ylim(0.5, 5.2)
```
eg.
$\mu = \begin{bmatrix}2 \\ 3 \end{bmatrix}$,
$\Sigma = \begin{bmatrix}2 & 3 \\ 3 & 7 \end{bmatrix}$
```
mu = [2, 3]
cov = [2, 3], [3, 7]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(-1, 6, 120)
yy = np.linspace(-1, 6, 150)
XX, YY = np.meshgrid(xx, yy)
plt.contour(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
```
# 2. Gaussian Normal Distribution and Eigenvalue Decomposition
- Since the covariance matrix $\Sigma$ is symmetric, it can be diagonalized:
$$ \Sigma^{-1} = V \Lambda^{-1}V^T$$
- Therefore
$$
\begin{eqnarray}
\mathcal{N}(x)
&\propto& \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x- \mu) \right) \\
&=& \exp \left( -\dfrac{1}{2}(x-\mu)^T V \Lambda^{-1} V^T (x- \mu) \right) \\
&=& \exp \left( -\dfrac{1}{2} x'^T \Lambda^{-1} x' \right) \\
\end{eqnarray}
$$
- $V$: eigenvectors of $\Sigma$
- New random variable: $x' = V^{-1}(x-\mu)$
- $\operatorname{Cov}[x']$: the matrix of eigenvalues $\Lambda$ of $\Sigma$
- Meaning of $x'$
  - Translation by $\mu$, followed by a change of basis to the eigenvectors
  - Correlations between the variables are removed
- Application: PCA, transforming highly correlated variables into $x_1', x_2'$
```
mu = [2, 3]
cov = [[4, 3], [3, 5]]
w, v = np.linalg.eig(cov)
print('eigen value: w', w, 'eigen vector: v', v, sep = '\n')
w_cov = [[1.45861873, 0], [0, 7.54138127]]
xx = np.linspace(-1, 5, 120)
yy = np.linspace(0, 6, 150)
XX, YY = np.meshgrid(xx, yy)
plt.figure(figsize=(8, 4))
d = dict(facecolor='k', edgecolor='k')
plt.subplot(121)
rv1 = sp.stats.multivariate_normal(mu, cov)
plt.contour(XX, YY, rv1.pdf(np.dstack([XX,YY])))
plt.annotate("", xy=(mu + 0.35 * w[0] * v[:, 0]), xytext=mu, arrowprops=d)
plt.annotate("", xy=(mu + 0.35 * w[1] * v[:, 1]), xytext=mu, arrowprops=d)
plt.title("$X_1$,$X_2$ Joint pdf")
plt.axis('equal')
# coordinate transformation onto the eigenvectors (v) of Cov(x)
# Cov(x') = the matrix of eigenvalues of Cov(x) (w_cov)
plt.subplot(122)
rv2 = sp.stats.multivariate_normal(mu, w_cov)  # coordinate-transformed Cov(x)
plt.contour(XX, YY, rv2.pdf(np.dstack([XX,YY])))
plt.annotate("", xy=(mu + 0.35 * w[0] * np.array([1, 0])), xytext=mu, arrowprops=d)
plt.annotate("", xy=(mu + 0.35 * w[1] * np.array([0, 1])), xytext=mu, arrowprops=d)
plt.title("$X'_1$,$X'_2$ Joint pdf")
plt.axis('equal')
plt.show()
```
# 3. Conditional Distribution of the Multivariate Gaussian
- Even if N of the M dimensions are observed, the conditional distribution of the remaining M-N random variables is still a Gaussian normal distribution.
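For the bivariate case, this can be checked with the standard conditioning formulas $\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(a-\mu_2)$ and $\sigma^2_{1|2} = \Sigma_{11} - \Sigma_{12}^2/\Sigma_{22}$, reusing $\mu$ and $\Sigma$ from the example in section 1:

```
import numpy as np

# Conditional distribution of x1 given x2 = a for a bivariate Gaussian,
# reusing mu and Sigma from the example in section 1.
mu = np.array([2.0, 3.0])
cov = np.array([[2.0, 3.0],
                [3.0, 7.0]])

a = 4.0  # observed value of x2
cond_mean = mu[0] + cov[0, 1] / cov[1, 1] * (a - mu[1])  # 2 + 3/7
cond_var = cov[0, 0] - cov[0, 1] ** 2 / cov[1, 1]        # 2 - 9/7 = 5/7
print(cond_mean, cond_var)
```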
# 4. Marginal Distribution of the Multivariate Gaussian
- Likewise, it is a Gaussian normal distribution.
$$\int p(x_1, x_2) \, dx_2 = \mathcal{N}(x_1; \mu_1, \sigma^2_1)$$
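A quick numerical check, again reusing $\mu$ and $\Sigma$ from section 1: integrating the joint pdf over $x_2$ should reproduce the univariate Gaussian $\mathcal{N}(x_1; \mu_1, \Sigma_{11})$.

```
import numpy as np
import scipy.stats

mu = np.array([2.0, 3.0])
cov = np.array([[2.0, 3.0],
                [3.0, 7.0]])
rv = scipy.stats.multivariate_normal(mu, cov)

# numerically integrate the joint pdf over x2 at a fixed x1
x1 = 2.5
x2_grid = np.linspace(-30.0, 40.0, 4001)
points = np.column_stack([np.full_like(x2_grid, x1), x2_grid])
marginal_numeric = np.trapz(rv.pdf(points), x2_grid)

# analytic marginal: N(x1; mu_1, Sigma_11)
marginal_analytic = scipy.stats.norm(mu[0], np.sqrt(cov[0, 0])).pdf(x1)
print(marginal_numeric, marginal_analytic)  # the two values agree
```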
# First Graph Convolutional Neural Network
This notebook shows a simple GCN learning using the KrasHras dataset from [Zamora-Resendiz and Crivelli, 2019](https://www.biorxiv.org/content/10.1101/610444v1.full).
```
import gcn_prot
import torch
import torch.nn.functional as F
from os.path import join, pardir
from random import seed
ROOT_DIR = pardir
seed = 8
```
## Table of contents
1. [Initialize Data](#Initialize-Data)
## Initialize Data
The data for this experiment is the one used for testing on the [CI of the repository](https://github.com/carrascomj/gcn-prot/blob/master/.travis.yml). Thus, it is already fetched.
The first step is to calculate the length of the largest protein (in number of amino acids), since all the proteins will be zero-padded to that value. That way, all the inputs fed to the model will have the same length.
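As a sketch of what zero padding means here (the helper name and shapes are illustrative assumptions; the padding itself is handled inside `gcn_prot`):

```
import numpy as np

# Hypothetical sketch of zero-padding a per-protein feature matrix to a
# fixed number of nodes; the real padding is done by gcn_prot internally.
def zero_pad(features, nb_nodes):
    """features: (n_feats, n_aa) -> (n_feats, nb_nodes), zero-padded."""
    n_feats, n_aa = features.shape
    padded = np.zeros((n_feats, nb_nodes))
    padded[:, :n_aa] = features
    return padded

small = np.ones((29, 120))         # a protein with 120 amino acids
print(zero_pad(small, 185).shape)  # (29, 185)
```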
```
largest = gcn_prot.data.get_longest(join(ROOT_DIR, "new_data", "graph"))
print(f"Largest protein has {largest} amino acids")
```
However, for this particular dataset, it is known from the aforementioned publication that 185 is enough, because the 4 terminal amino acids were not well determined and will later be discarded by the mask.
```
largest = 185
data_path = join(ROOT_DIR, "new_data")
```
The split is performed with 70/20/10 for train/test/valid.
Note that the generated datasets (custom child classes of `torch.utils.data.Dataset`) don't store the graphs in memory, only their paths, generating each graph on the fly when accessed by index.
```
train, test, valid = gcn_prot.data.get_datasets(
data_path=data_path,
nb_nodes=largest,
task_type="classification",
nb_classes=2,
split=[0.7, 0.2, 0.1],
seed=42,
)
print(f"Train: {len(train)}\nTest: {len(test)}\nValidation: {len(valid)}")
type(train)
```
## Define the neural network
Each instance in the dataset retrieves a list of four matrices:
1. **feature matrix**: 29 x 185. This corresponds to the amino acid type (a one-hot encoded vector of length 23), residue depth, residue orientation, and 4 features encoding the positional index with a sinusoidal transformation.
2. **coordinates**: 3 x 185. x, y, z coordinates of every amino acid in the crystal (centered).
3. **mask**: to be applied to the adjacency matrix to discard ill-identified amino acids.
4. **y**: 2 labels, Kras/Hras.
The transformation of this list into the input of the neural network (feature matrix, adjacency matrix) is performed during training.
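One plausible sketch of that transformation is a distance-based adjacency matrix built from the coordinates (the function name and the 8 Å cutoff are assumptions for illustration; the actual transform lives inside `gcn_prot`'s training code):

```
import numpy as np

# Hypothetical sketch: pairwise-distance adjacency from a (3, N) coordinate
# matrix; the real transformation is performed by gcn_prot during training.
def adjacency_from_coords(coords, cutoff=8.0):
    """coords: (3, N) array -> (N, N) binary adjacency matrix."""
    diffs = coords[:, :, None] - coords[:, None, :]  # shape (3, N, N)
    dists = np.sqrt((diffs ** 2).sum(axis=0))        # pairwise distances
    adj = (dists < cutoff).astype(float)
    np.fill_diagonal(adj, 0.0)                       # drop self-loops
    return adj

coords = np.random.rand(3, 5) * 20.0
A = adjacency_from_coords(coords)
print(A.shape)  # (5, 5)
```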
```
model = gcn_prot.models.GCN_simple(
feats=29, # features in feature matrix
hidden=[8, 8], # number of neurons in convolutional layers (3 in this case)
label=2, # features on y
nb_nodes=largest, # for last layer
dropout=0, # applied in the convolutional layers
bias=False, # default
act=F.relu, # default
cuda=True # required for sparsize and fit_network
).cuda()
```
Now, instantiate the criterion and the optimizer.
```
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.CrossEntropyLoss().cuda()
```
## Train the network
```
%matplotlib inline
save_path = join(ROOT_DIR, "models", "GCN_tiny_weigths.pt")
model_na = gcn_prot.models.fit_network(
model, train, test, optimizer, criterion,
batch_size=20, # a lot of batches per epoch
epochs=20,
debug=True, # will print progress of epochs
plot_every=5, # loss plot/epoch
save=save_path # best weights (test set) will be saved here
)
```
Debug with validation.
```
model.eval()
for batch in torch.utils.data.DataLoader(
valid, shuffle=True, batch_size=2, drop_last=False
):
print(gcn_prot.models.train.forward_step(batch, model, False))
```
```
import datetime
import pandas as pd
import spacy
import re
import string
import numpy as np
import sys
import seaborn as sns
from matplotlib import cm
from matplotlib.pyplot import figure
import matplotlib.pyplot as plt
%matplotlib inline
from spacy.tokens import Token
from tqdm import tqdm
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import nltk
from nltk.corpus import stopwords
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import f1_score
from sklearn.metrics import silhouette_score
from sklearn import metrics
from sklearn.model_selection import ShuffleSplit
import gensim
from gensim import corpora, models
from gensim.models.phrases import Phrases, Phraser
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from gensim.models.coherencemodel import CoherenceModel
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec
import multiprocessing
from sklearn.model_selection import cross_val_score , GridSearchCV,train_test_split
from sklearn.naive_bayes import MultinomialNB,GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
import enchant
pd.set_option('display.max_rows', 500)
dict_check = enchant.Dict("en_US")
#### Importing the file ####
Path="src/"
Filename='projects_Preprocessed.csv'
df=pd.read_csv(Path+Filename)
Cat_File="category_hier.csv"
Cat_data=pd.read_csv(Path+Cat_File)
varcluster_file="variable_clusters.csv"
varcluster=pd.read_csv(Path+varcluster_file)
manualtag=pd.read_csv(Path+'SamplesManualTagger.csv')
varcluster_info=pd.read_csv(Path+'variable_clusters_info_v2.csv')
df=df[df['Translates']!="The goal of the Heisenberg Program is to enable outstanding scientists who fulfill all the requirements for a long-term professorship to prepare for a scientific leadership role and to work on further research topics during this time. In pursuing this goal, it is not always necessary to choose and implement project-based procedures. For this reason, in the submission of applications and later in the preparation of final reports - unlike other support instruments - no 'summary' of project descriptions and project results is required. Thus, such information is not provided in GEPRIS."]
## Filtering the null abstracts & short description
df=df[(pd.isnull(df.PreProcessedDescription)==False) & (df.PreProcessedDescription.str.strip()!='abstract available')& (df.PreProcessedDescription.str.len()>100) & (pd.isnull(df["SubjectArea"])==False)]
# Striping the category column
Cat_data.Category=Cat_data.Category.str.strip()
## Merging the high level category information
df=df.merge(Cat_data[["File_Categories","Category"]], how="left", left_on="SubjectArea", right_on="File_Categories")
## -1 marks interdisciplinary projects, 1 marks normal (single-category) data
manualtag['interdiscipilinary']=-1
manualtag.loc[manualtag.apply(lambda x: (x['Category']==x['Category_1']) & (pd.isnull(x['Category_2'])) , axis=1),'interdiscipilinary']=1
```
## 1.1 Word Embedding
```
## Word Embeddings Functions
## Generate the tagged documents (tagging based on the category column)
def create_tagged_document(list_of_list_of_words):
for i, list_of_words in enumerate(list_of_list_of_words):
yield gensim.models.doc2vec.TaggedDocument(list_of_words, [i])
## Generate the tagged documents (each record in single tag )
def create_tagged_document_based_on_tags(list_of_list_of_words, tags):
for i in range(len(list_of_list_of_words)):
yield gensim.models.doc2vec.TaggedDocument(list_of_list_of_words[i], [tags[i]])
def make_bigram(inputlist):
bigram = Phrases(inputlist, min_count=1, threshold=1,delimiter=b' ')
bigram_phraser = Phraser(bigram)
new_list=[]
for sent in inputlist:
new_list.append(bigram_phraser[sent])
return new_list
## Generate output using the word embedding model prediction - takes long time to regenerate
def vec_for_learning(model, tagged_docs):
sents = tagged_docs#.values
targets, regressors = zip(*[(doc.tags[0], model.infer_vector(doc.words, steps=20)) for doc in sents])
return targets, regressors
## creating a tagged document
DescDict=make_bigram([[x for x in str(i).split()] for i in df.PreProcessedDescription])
tagged_value = list(create_tagged_document(DescDict))
print(str(datetime.datetime.now()),'Started')
# Init the Doc2Vec model
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=5, epochs=40, alpha = 0.02, dm=1, workers=multiprocessing.cpu_count())
#### Hyper parameter ####
## vector_size – Dimensionality of the feature vectors.
## If dm=1, use 'distributed memory' (PV-DM), which is analogous to continuous bag-of-words (CBOW)
## alpha - The initial learning rate.
## min_count – Ignores all words with total frequency lower than this.
# Build the Volabulary
model.build_vocab(tagged_value)
model.train(tagged_value, total_examples=len(tagged_value), epochs=40)
print(str(datetime.datetime.now()),'Completed')
## Validating the model response for random words
modelchecked=model
target_word='laptop'
print('target_word: %r model: %s similar words:' % (target_word, modelchecked))
for i, (word, sim) in enumerate(modelchecked.wv.most_similar(target_word, topn=20), 1):
print(' %d. %.2f %r' % (i, sim, word))
```
## 1.2. PCA
```
## PCA - reducing the dimenstion
ps=20
pcamodel = PCA(n_components=ps)
pca=pcamodel.fit_transform(model.docvecs.vectors_docs)
print('PCA components:', ps, 'Variance coverage:', np.max(pcamodel.explained_variance_ratio_.cumsum())*100)
dummies=pd.get_dummies(df['Category'])
merged_data=pd.concat([df,dummies], axis=1,ignore_index=False)
merged_data=pd.concat([merged_data,pd.DataFrame(pca)], axis=1,ignore_index=False)
SubjectAreaIds=pd.DataFrame(enumerate(merged_data.SubjectArea.unique()),columns=['SubjectAreaId','SubjectArea2'])
finalcols=merged_data.columns.tolist()+['SubjectAreaId']
merged_data=merged_data.merge(SubjectAreaIds, how='left',left_on='SubjectArea',right_on='SubjectArea2')[finalcols]
merged_data=merged_data[pd.isnull(merged_data["Category"])==False]
merged_data['ISOForestCluster']=1
```
# 2. ISO Forest
```
cat='Life Sciences'
FeatureCols=list(range(ps))
CategoricalDS=merged_data[FeatureCols][merged_data.Category==cat]
# n_estimators (default=100) - The number of base estimators in the ensemble.
# max_samples (default=”auto”) - The number of samples to draw from X to train each base estimator. If max_samples is larger than the number of samples provided, all samples will be used for all trees (no sampling)
# max_features (default=1.0) - The number of features to draw from X to train each base estimator.
# contamination(default=’auto’) - The proportion of outliers in the data set. If float, the contamination should be in the range [0, 0.5].
# bootstrap (default=False) - If True, individual trees are fit on random subsets of the training data sampled with replacement. If False, sampling without replacement is performed.
param_dict={'contamination': ['auto',.1,.2],'n_estimators': list(range(1, 100, 5)),'max_features': [10,15,20], 'max_samples': ['auto']}
i_category=[]
i_contamination=[]
i_n_estimators=[]
i_max_features=[]
silhouette_scores=[]
number_outliers=[]
recalls=[]
FeatureCols=list(range(ps))
for cat in merged_data.Category.unique():
CategoricalDS= merged_data[FeatureCols][merged_data.Category==cat]
e=0
for contamination in param_dict['contamination']:
for n_estimators in param_dict['n_estimators']:
for max_features in param_dict['max_features']:
clusterer = IsolationForest(behaviour='new', bootstrap=False, contamination=contamination,
max_features=max_features, max_samples='auto', n_estimators=n_estimators, n_jobs=None,
verbose=0, warm_start=False, random_state=np.random.RandomState(42))
preds = clusterer.fit_predict(CategoricalDS)
pred_uniq=len(pd.Series(preds).unique())
merged_data.loc[merged_data.Category==cat,'ISOForestCluster']=preds
noo=merged_data.loc[(merged_data.Category==cat) & (merged_data.ISOForestCluster==-1),'ISOForestCluster'].count()
if((pred_uniq==2) and (noo>300) and (noo<2000) ) :
score = silhouette_score(CategoricalDS, preds, metric='euclidean')
manualtag_result_1=manualtag[manualtag.Category==cat][['Translates', 'Category_1', 'Category_2','interdiscipilinary']].merge(merged_data,how='left', left_on='Translates', right_on='Translates')[['Translates', 'Category_1', 'Category_2','interdiscipilinary','ISOForestCluster']]
recall=round(metrics.recall_score(manualtag_result_1.interdiscipilinary, manualtag_result_1.ISOForestCluster, pos_label=-1),2)
else:
score=0
recall=0
i_category.append(cat)
i_contamination.append(contamination)
i_n_estimators.append(n_estimators)
i_max_features.append(max_features)
silhouette_scores.append(score)
number_outliers.append(noo)
recalls.append(recall)
e=e+1
print(cat,': contamination -',contamination ,e,'/',len(param_dict['contamination']),'completed')
Param_tuning=pd.DataFrame({
'i_category':i_category,
'i_contamination':i_contamination,
'i_n_estimators':i_n_estimators,
'i_max_features':i_max_features,
'silhouette_scores':silhouette_scores,
'number_outliers':number_outliers,
'recalls':recalls
})
Param_tuning.sort_values(by=['i_category','recalls','silhouette_scores','number_outliers'],ascending=False)
for cat in merged_data.Category.unique():
print(Param_tuning[(Param_tuning['i_category']==cat)].sort_values(by=['recalls','silhouette_scores','number_outliers'],ascending=False).head(5))
Param_tuning.to_csv(Path+'ParamTuning_ISOForestV3.csv', index=False)
bestparam={}
for cat in merged_data.Category.unique():
bestparams=Param_tuning[(Param_tuning['number_outliers']>500) & (Param_tuning['i_category']==cat)].sort_values(by=['recalls','silhouette_scores','number_outliers'],ascending=False).head(1).values[0]
bestparam[cat]={'contamination':bestparams[1],'n_estimators':bestparams[2],'max_features':bestparams[3]}
bestparam
#bestparam= {'Natural Sciences': {'contamination': 'auto',
# 'n_estimators': 91,
# 'max_features': 20},
# 'Humanities and Social Sciences': {'contamination': 0.2,
# 'n_estimators': 26,
# 'max_features': 20},
# 'Engineering Sciences': {'contamination': 0.2,
# 'n_estimators': 51,
# 'max_features': 15},
# 'Life Sciences': {'contamination': 'auto',
# 'n_estimators': 51,
# 'max_features': 15}}
### ISOForest - refit with the best parameters per category
FeatureCols=list(range(ps))+['FundingFrom','FundingEnd']
for cat in merged_data.Category.unique():
print(str(datetime.datetime.now()),'Started')
print('******'+cat+'******')
CategoricalDS= merged_data[FeatureCols][merged_data.Category==cat]
clusterer = IsolationForest(behaviour='new', bootstrap=False, contamination=bestparam[cat]['contamination'],
max_features=bestparam[cat]['max_features'], max_samples='auto', n_estimators=bestparam[cat]['n_estimators'], n_jobs=None,
random_state=np.random.RandomState(42), verbose=0, warm_start=False)
preds = clusterer.fit_predict(CategoricalDS)
merged_data.loc[merged_data.Category==cat,'ISOForestCluster']=preds
print(pd.Series(preds).value_counts())
#noo=merged_data.loc[(merged_data.Category==cat) & (merged_data.DBScanCluster==-1),'DBScanCluster'].count()
score = silhouette_score(CategoricalDS, preds, metric='euclidean')
manualtag_result_1=manualtag[manualtag.Category==cat][['Translates', 'Category_1', 'Category_2','interdiscipilinary']].merge(merged_data,how='left', left_on='Translates', right_on='Translates')[['Translates', 'Category_1', 'Category_2','interdiscipilinary','ISOForestCluster']]
recall=round(metrics.recall_score(manualtag_result_1.interdiscipilinary, manualtag_result_1.ISOForestCluster, pos_label=-1),2)
print('silhouette_score',round(score,2),'recall_score',recall)
print(str(datetime.datetime.now()),'Completed')
print('')
merged_data['ISOForestCluster'].value_counts()
manualtag_result_1=manualtag[['Translates', 'Category_1', 'Category_2','interdiscipilinary']].merge(merged_data,how='left', left_on='Translates', right_on='Translates')[['Translates', 'Category_1', 'Category_2','interdiscipilinary','ISOForestCluster']]
recall=round(metrics.recall_score(manualtag_result_1.interdiscipilinary, manualtag_result_1.ISOForestCluster, pos_label=-1),2)
## Of all the interdisciplinary projects, how many were correctly classified as outliers? This recall should be as high as possible.
print('Overall Recall',recall)
# Print the confusion matrix
print(metrics.confusion_matrix(manualtag_result_1.interdiscipilinary, manualtag_result_1.ISOForestCluster))
# Print the precision and recall, among other metrics
print(metrics.classification_report(manualtag_result_1.interdiscipilinary, manualtag_result_1.ISOForestCluster, digits=2))
## Resetting the index and converting the category to an int for supervised learning
def CattoID(input_cat):
if(input_cat=='Engineering Sciences'):
return 0
elif(input_cat=='Humanities and Social Sciences'):
return 1
elif(input_cat=='Natural Sciences'):
return 2
elif(input_cat=='Life Sciences'):
return 3
else :
return -1
merged_data=merged_data.reset_index()[merged_data.columns[0:]]
merged_data['CategoryConv']=merged_data.Category.apply(CattoID)
merged_data['CategoryConv']=merged_data['CategoryConv'].astype('int')
```
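As an aside, the `CattoID` if/elif chain defined above can be written as a dictionary lookup with `Series.map`; a sketch (`CAT_TO_ID` simply mirrors the mapping in the function):

```python
import pandas as pd

CAT_TO_ID = {'Engineering Sciences': 0,
             'Humanities and Social Sciences': 1,
             'Natural Sciences': 2,
             'Life Sciences': 3}

cats = pd.Series(['Life Sciences', 'Natural Sciences', 'Unknown'])
ids = cats.map(CAT_TO_ID).fillna(-1).astype(int)
# unmapped values fall back to -1, matching the final else branch
```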
# 3. Supervised learning
```
validation_part=manualtag[['Translates', 'Category_1', 'Category_2','interdiscipilinary']].merge(merged_data,how='left', left_on='Translates', right_on='Translates')
merged_data['validation_part']=False
merged_data.loc[merged_data['Translates'].isin(manualtag['Translates']), 'validation_part'] = True
Features=FeatureCols
merged_data[Features]=MinMaxScaler().fit_transform(merged_data[Features])
OP_Feature='CategoryConv'
## Training & test data are split based on the ISOForestCluster result; outlier data are treated as test data to re-evaluate.
validation_part=validation_part[(validation_part.ISOForestCluster==-1)]
X_Validation_DS=validation_part[Features]
validation_part['Category_1']=validation_part['Category_1'].apply(CattoID)
validation_part['Category_2']=validation_part['Category_2'].apply(CattoID)
X_Training_DS=merged_data[Features][(merged_data.ISOForestCluster==1) ]
y_Training_DS=merged_data[OP_Feature][(merged_data.ISOForestCluster==1) ]
X_Test_DS=merged_data[Features][(merged_data.ISOForestCluster!=1) & (merged_data['validation_part']==False)]
y_Test_DS=merged_data[OP_Feature][(merged_data.ISOForestCluster!=1) & (merged_data['validation_part']==False)]
X_train, X_test, y_train, y_test = train_test_split(X_Training_DS,y_Training_DS, test_size=0.25, random_state=0)
```
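One caveat about the scaling above: `MinMaxScaler` is fitted on the full `merged_data` before any splitting, so test and validation rows influence the learned bounds. The conventional pattern fits on the training split only; a minimal numpy sketch of that behaviour:

```python
import numpy as np

def minmax_fit(train):
    """Learn per-column bounds from the training rows only."""
    return train.min(axis=0), train.max(axis=0)

def minmax_apply(x, lo, hi):
    """Scale with the training bounds; guard zero-width columns."""
    span = np.where(hi > lo, hi - lo, 1)
    return (x - lo) / span

train = np.array([[0.0], [10.0]])
test = np.array([[5.0], [20.0]])
lo, hi = minmax_fit(train)
scaled = minmax_apply(test, lo, hi)
# unseen test values can land outside [0, 1]: here 20 maps to 2.0
```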
## 3.1 NaiveBayes
```
modelNB = MultinomialNB(alpha=1)
#### Hyper parameter ####
# alpha - Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
modelNB.fit(X_train, y_train)
nfolds=5
scores=cross_val_score(modelNB, X_Training_DS,y_Training_DS, cv=nfolds, scoring="accuracy")
pd.Series(scores).plot(kind="box", label="Accuracy");
plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
y_pred = modelNB.predict(X_test)
print('Accuracy Score : '+str(accuracy_score(y_test,y_pred )*100))
```
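The `alpha` parameter above adds a pseudo-count to every feature count, so no class ever assigns zero probability to a feature it has not seen. A pure-numpy illustration of the smoothed per-class feature estimate (simplified; `MultinomialNB` itself works in log space):

```python
import numpy as np

def smoothed_feature_probs(counts, alpha=1.0):
    """P(feature | class) with additive (Laplace) smoothing.
    counts: total per-feature counts observed for one class."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

probs = smoothed_feature_probs([3, 0, 1], alpha=1.0)
# (4, 1, 2) / 7 -- strictly positive and summing to 1
```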
## 3.2 k-nearest neighbors
```
kvalue=[]
test_accuracy=[]
test_precision=[]
validation_accuracy=[]
validation_precision=[]
cross_accuray=[]
for k in [4,6,10,16,25,35]:
modelKBC = KNeighborsClassifier(n_neighbors=k, weights='distance')
#### Hyper parameter ####
# n_neighbors - Number of neighbors to use by default for kneighbors queries
# weights - weight function used in prediction (‘distance’ : weight points by the inverse of their distance.
#in this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.)
modelKBC.fit(X_train, y_train)
y_pred = modelKBC.predict(X_test)
val_preds=modelKBC.predict(X_Validation_DS)
validation_part['val_preds']=val_preds
y_Validation_DS=validation_part[['val_preds','Category_1','Category_2']].apply(lambda x: x['Category_2'] if(x['Category_2']== x['val_preds']) else x['Category_1'] ,axis=1)
kvalue.append(k)
test_accuracy.append( round( metrics.accuracy_score(y_test,y_pred),2) )
test_precision.append(round(metrics.precision_score(y_test,y_pred, average='macro'),2) )
validation_accuracy.append( round(metrics.accuracy_score(y_Validation_DS,val_preds),2) )
validation_precision.append( round(metrics.precision_score(y_Validation_DS,val_preds, average='macro'),2) )
#nfolds=3
#scores=cross_val_score(modelKBC, X_train,y_train, cv=nfolds, scoring="accuracy")
#pd.Series(scores).plot(kind="box", label="Accuracy");
#plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
#cross_accuray
print('neighbors:',k,'completed')
kNNResult=pd.DataFrame({'kvalue':kvalue, 'test_accuracy':test_accuracy,'test_precision':test_precision, 'validation_accuracy':validation_accuracy,'validation_precision':validation_precision })
kNNResult
k=4
modelKBC = KNeighborsClassifier(n_neighbors=k, weights='distance')
modelKBC.fit(X_train, y_train)
y_pred = modelKBC.predict(X_test)
nfolds=3
scores=cross_val_score(modelKBC, X_train,y_train, cv=nfolds, scoring="accuracy")
pd.Series(scores).plot(kind="box", label="Accuracy");
plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
# Print the confusion matrix
print(metrics.confusion_matrix(y_test, y_pred))
# Print the precision and recall, among other metrics
print(metrics.classification_report(y_test, y_pred, digits=3))
val_preds=modelKBC.predict(X_Validation_DS)
validation_part['val_preds']=val_preds
y_Validation_DS=validation_part[['val_preds','Category_1','Category_2']].apply(lambda x: x['Category_2'] if(x['Category_2']== x['val_preds']) else x['Category_1'] ,axis=1)
# Print the confusion matrix
print(metrics.confusion_matrix(y_Validation_DS, val_preds))
# Print the precision and recall, among other metrics
print(metrics.classification_report(y_Validation_DS, val_preds, digits=2))
```
## 3.3 Linear SVM
```
cvalue=[]
test_accuracy=[]
test_precision=[]
validation_accuracy=[]
validation_precision=[]
cross_accuray=[]
for x in [.01]+list(np.linspace(0.1,100,5))+[10]:
modelSVC = svm.LinearSVC(C=x).fit(X_train, y_train)
#### Hyper parameter ####
# C - The strength of the regularization is inversely proportional to C.
y_pred = modelSVC.predict(X_test)
val_preds=modelSVC.predict(X_Validation_DS)
validation_part['val_preds']=val_preds
y_Validation_DS=validation_part[['val_preds','Category_1','Category_2']].apply(lambda x: x['Category_2'] if(x['Category_2']== x['val_preds']) else x['Category_1'] ,axis=1)
cvalue.append(x)
test_accuracy.append( round( metrics.accuracy_score(y_test,y_pred),2) )
test_precision.append(round(metrics.precision_score(y_test,y_pred, average='macro'),2) )
validation_accuracy.append( round(metrics.accuracy_score(y_Validation_DS,val_preds),2) )
validation_precision.append( round(metrics.precision_score(y_Validation_DS,val_preds, average='macro'),2) )
SVCResult=pd.DataFrame({'cvalue':cvalue, 'test_accuracy':test_accuracy,'test_precision':test_precision, 'validation_accuracy':validation_accuracy,'validation_precision':validation_precision })
SVCResult
```
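The parameter sweeps above (and the triple-nested Isolation Forest grid in section 2) can be flattened into a single loop with `itertools.product`; a sketch using the same grid:

```python
from itertools import product

param_dict = {'contamination': ['auto', 0.1, 0.2],
              'n_estimators': list(range(1, 100, 5)),
              'max_features': [10, 15, 20]}

combos = [dict(zip(param_dict, values))
          for values in product(*param_dict.values())]
# 3 * 20 * 3 = 180 candidate settings, iterable in one flat loop
```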
## 4. Formatting the output categories based on the predict_proba
```
## Based on the predict_proba result, reorder the values and categories by descending probability.
def name_max_value(DF):
colname='Category_1_Values'
if (DF['Engineering Sciences']==DF[colname]):
return 'Engineering Sciences'
elif (DF['Humanities and Social Sciences']==DF[colname]):
return 'Humanities and Social Sciences'
elif (DF['Natural Sciences']==DF[colname]):
return 'Natural Sciences'
elif (DF['Life Sciences']==DF[colname]):
return 'Life Sciences'
else:
return ''
def name_sec_max_value(DF):
colname='Category_2_Values'
if(DF[colname]==0):
return ''
elif ((DF['Engineering Sciences']==DF[colname]) & (DF['Category_1']!='Engineering Sciences')):
return 'Engineering Sciences'
elif ((DF['Humanities and Social Sciences']==DF[colname]) & (DF['Category_1']!='Humanities and Social Sciences')):
return 'Humanities and Social Sciences'
elif ((DF['Natural Sciences']==DF[colname]) & (DF['Category_1']!='Natural Sciences')):
return 'Natural Sciences'
elif ((DF['Life Sciences']==DF[colname]) & (DF['Category_1']!='Life Sciences')):
return 'Life Sciences'
else:
return ''
def name_3rd_max_value(DF):
colname='Category_3_Values'
if(DF[colname]==0):
return ''
elif ((DF['Engineering Sciences']==DF[colname]) & (DF['Category_2']!='Engineering Sciences')):
return 'Engineering Sciences'
elif ((DF['Humanities and Social Sciences']==DF[colname]) & (DF['Category_2']!='Humanities and Social Sciences')):
return 'Humanities and Social Sciences'
elif ((DF['Natural Sciences']==DF[colname]) & (DF['Category_2']!='Natural Sciences')):
return 'Natural Sciences'
elif ((DF['Life Sciences']==DF[colname]) & (DF['Category_2']!='Life Sciences')):
return 'Life Sciences'
else:
return ''
cols=['Engineering Sciences','Humanities and Social Sciences','Natural Sciences','Life Sciences']
PredictedValues=pd.DataFrame(modelKBC.predict_proba(merged_data[Features]), columns=cols)
PredictedValues['Category_1_Values']=PredictedValues[cols].apply(np.max,axis=1)
PredictedValues['Category_2_Values']=PredictedValues[cols].apply(np.sort,axis=1).apply(lambda x:x[2])
PredictedValues['Category_3_Values']=PredictedValues[cols].apply(np.sort,axis=1).apply(lambda x:x[1])
PredictedValues['Category_1']=PredictedValues.apply(name_max_value,axis=1)
PredictedValues['Category_2']=PredictedValues.apply(name_sec_max_value,axis=1)
PredictedValues['Category_3']=PredictedValues.apply(name_3rd_max_value,axis=1)
PredictedValues['Category_12_Variance']=PredictedValues.apply(lambda x :x['Category_1_Values']-x['Category_2_Values'], axis=1)
PredictedValues['Category_23_Variance']=PredictedValues.apply(lambda x :x['Category_2_Values']-x['Category_3_Values'], axis=1)
PredictedValues.loc[PredictedValues['Category_3_Values']<=.15,'Category_3']=''
PredictedValues.loc[PredictedValues['Category_2_Values']<=.15,'Category_2']=''
PredictedValues.loc[PredictedValues['Category_1_Values']>=.80,'Category_2']=''
PredictedValues.loc[PredictedValues['Category_1_Values']>=.80,'Category_3']=''
PredictedValues['Category']=merged_data['Category']
fil_1_2=(PredictedValues['Category_12_Variance']<=.10) & ((PredictedValues['Category_1']==PredictedValues['Category']) | (PredictedValues['Category_2']==PredictedValues['Category']))
fil_2_3=(PredictedValues['Category_23_Variance']<=.10) & ((PredictedValues['Category_3']==PredictedValues['Category']) | (PredictedValues['Category_2']==PredictedValues['Category']))
PredictedValues.loc[(fil_1_2 | fil_2_3) ,'Category_1']=PredictedValues.loc[(fil_1_2 | fil_2_3) ,'Category']
PredictedValues.loc[(fil_1_2 | fil_2_3) ,'Category_2']=''
PredictedValues.loc[(fil_1_2 | fil_2_3) ,'Category_3']=''
```
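The three `name_*_max_value` chains above rank each row's class probabilities; the same top-3 extraction can be sketched with `np.argsort` (illustrative only — it omits the thresholding and tie rules applied afterwards):

```python
import numpy as np

cats = ['Engineering Sciences', 'Humanities and Social Sciences',
        'Natural Sciences', 'Life Sciences']

def top3(proba_row):
    """(category, probability) pairs for one row, highest first."""
    order = np.argsort(proba_row)[::-1][:3]
    return [(cats[i], float(proba_row[i])) for i in order]

ranked = top3(np.array([0.1, 0.6, 0.25, 0.05]))
# highest-probability category comes first
```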
## 5.1. Manual Validation
```
## regenerating dataset
NewMergedDSAligned=pd.concat([merged_data[merged_data.columns.tolist()[:12]+['ISOForestCluster']],PredictedValues[PredictedValues.columns[4:12]]], axis=1, ignore_index=False)
fil_1_2=(NewMergedDSAligned['Category_12_Variance']<=.10) & ((NewMergedDSAligned['Category_1']==NewMergedDSAligned['Category']) | (NewMergedDSAligned['Category_2']==NewMergedDSAligned['Category']))
fil_2_3=(NewMergedDSAligned['Category_23_Variance']<=.10) & ((NewMergedDSAligned['Category_3']==NewMergedDSAligned['Category']) | (NewMergedDSAligned['Category_2']==NewMergedDSAligned['Category']))
NewMergedDSAligned.loc[(fil_1_2 | fil_2_3) ,'Category_1']=NewMergedDSAligned.loc[(fil_1_2 | fil_2_3) ,'Category']
NewMergedDSAligned.loc[(fil_1_2 | fil_2_3) ,'Category_2']=''
NewMergedDSAligned.loc[(fil_1_2 | fil_2_3) ,'Category_3']=''
NewMergedDSAligned['ISOForestCluster'].value_counts()
#(NewMergedDSAligned.ISOForestCluster!=0) &
NewMergedDSAligned['ISOForestCluster'][ (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].value_counts()
NewMergedDSAligned['Category'][(NewMergedDSAligned.ISOForestCluster!=1) & (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].value_counts()
cats='Natural Sciences'
lim=20
NewMergedDSAligned[['Translates','Category']+NewMergedDSAligned.columns[13:].tolist()][(NewMergedDSAligned['Category_1']!=cats) & (NewMergedDSAligned['Category']==cats) & (NewMergedDSAligned.ISOForestCluster!=1) & (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].sort_values('Category_1_Values', ascending=False).head(lim).tail(5)
#cats='Humanities and Social Sciences'
NewMergedDSAligned[['Translates','Category_1_Values']][(NewMergedDSAligned['Category_1']!=cats) & (NewMergedDSAligned['Category']==cats) & (NewMergedDSAligned.ISOForestCluster!=1) & (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].sort_values('Category_1_Values', ascending=False).Translates.head(lim).tail(5).tolist()#.tail().
#NewMergedDSAligned.to_csv(Path+'WEPCAISOFindingsKMeans.csv', index=False)
```
## 5.2. Validation with manual taggings
```
## regenerating dataset
MergeValidation=pd.concat([merged_data[['ISOForestCluster','Translates']],PredictedValues[PredictedValues.columns[4:]]], axis=1, ignore_index=False)
MergeValidationResult_2=manualtag[['Category','Translates', 'Category_1', 'Category_2','interdiscipilinary']].merge(MergeValidation,how='left', left_on='Translates', right_on='Translates',suffixes= ('_Actual','_Pred'))
## Function to pick the best-matching prediction/actual pair for validation
def rebuilt_ip(x):
final_pred=x['Category_1_Pred']
final_actual=x['Category_1_Actual']
result=0
if(x['Category_1_Pred'] == x['Category_1_Actual']):
final_pred=x['Category_1_Pred']
final_actual=x['Category_1_Actual']
result=1
elif(x['Category_1_Pred'] == x['Category_2_Actual']):
final_pred=x['Category_1_Pred']
final_actual=x['Category_2_Actual']
result=1
elif(x['Category_2_Pred'] == x['Category_1_Actual']):
final_pred=x['Category_2_Pred']
final_actual=x['Category_1_Actual']
result=.66
elif(x['Category_2_Pred'] == x['Category_2_Actual']):
final_pred=x['Category_2_Pred']
final_actual=x['Category_2_Actual']
result=.66
elif(x['Category_3'] == x['Category_1_Actual']):
final_pred=x['Category_2_Pred']
final_actual=x['Category_1_Actual']
result=.33
elif(x['Category_3'] == x['Category_2_Actual']):
final_pred=x['Category_2_Pred']
final_actual=x['Category_2_Actual']
result=.33
# if it is not an outlier, keep the original category assignment
if(x['ISOForestCluster']!=-1):
final_pred=x['Category_Actual']
final_actual=x['Category_1_Actual']
result=-1
return pd.Series({'pred':final_pred,'actual':final_actual,'result':result})
MergeValidationResult_3=pd.concat([MergeValidationResult_2,pd.DataFrame(MergeValidationResult_2.apply(rebuilt_ip, axis=1))], axis=1)
MergeValidationResult_3['Match']=MergeValidationResult_3.apply(lambda x: 'Correct' if(x['pred']==x['Category_Actual']) else 'interdiscipilinary' , axis=1)
MergeValidationResult_3.groupby(['ISOForestCluster','Match']).count()[['Translates']]
# Print the confusion matrix
print(metrics.confusion_matrix(MergeValidationResult_3.actual, MergeValidationResult_3.pred))
# Print the precision and recall, among other metrics
print(metrics.classification_report(MergeValidationResult_3.actual, MergeValidationResult_3.pred, digits=3))
```
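The matching logic in `rebuilt_ip` above amounts to a partial-credit score over ranked predictions (1.0 for a first-choice match, 0.66 for second, 0.33 for third). Isolated as a small helper, it is easy to unit-test; a sketch:

```python
def partial_credit(preds, actuals):
    """Score ranked predictions against a set of acceptable labels."""
    weights = [1.0, 0.66, 0.33]
    for weight, pred in zip(weights, preds):
        if pred in actuals:
            return weight
    return 0.0

score = partial_credit(['Life Sciences', 'Natural Sciences', ''],
                       {'Natural Sciences'})
# the second-ranked prediction matches, so score == 0.66
```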
## 5.3. Per-category TF-IDF based result evaluation
```
#&(NewMergedDSAligned['Category']==cats) &(NewMergedDSAligned['Category_1']==check_cat)
input_data=NewMergedDSAligned[(NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1']) & (NewMergedDSAligned.ISOForestCluster!=1)].copy()
input_data.loc[:,'CategoryCollc']=input_data[['Category','Category_1','Category_2','Category_3']].apply(lambda x:x[0]+','+x[1]+','+x[2]+','+x[3], axis=1)
#input_data.loc[:,'CategoryCollc']=input_data[['Category','Category_1']].apply(lambda x:x[0]+','+x[1], axis=1)
input_data.loc[:,'CategoryCollc']=input_data['CategoryCollc'].str.strip(",")
varcluster_info.cluster_id=varcluster_info.cluster_id.astype('int32')
varclusterall=varcluster.merge(varcluster_info, how='left',left_on='Cluster', right_on='cluster_id')
varclusterall=varclusterall[varclusterall.RS_Ratio<.98]
def find_category(target_word):
try :
sim_word=list(map(lambda x:x[0] ,modelchecked.wv.most_similar(target_word, topn=5)))
finalcategory=varclusterall[varclusterall.Variable.isin(sim_word)].category.value_counts().sort_values(ascending=False).head(1).index
if(len(finalcategory)>0):
return finalcategory[0]
else:
return np.NaN
except :
return np.NaN
sizes=len(input_data.CategoryCollc.unique())
category_tfidfs=pd.DataFrame()
with tqdm(total=len(input_data['CategoryCollc'].unique())) as bar:
for i,bucket in input_data.groupby(['CategoryCollc']):
varcat=pd.DataFrame()
vectorizer = TfidfVectorizer(max_features=20, ngram_range=(1, 1))
review_vectors = vectorizer.fit_transform(bucket["PreProcessedDescription"])
features_df = pd.DataFrame(review_vectors.toarray(), columns = vectorizer.get_feature_names())
varcat=pd.DataFrame(features_df.sum().sort_values(ascending=False)).merge(varclusterall, how='left', left_index=True, right_on='Variable')[[0,'Variable','category']]
varcat.category=varcat[['Variable', 'category']].apply(lambda x: find_category(x.Variable) if(pd.isnull(x['category'])) else x['category'], axis=1)
varcat['bucket_length']=len(bucket)
varcat['bucket_category']=bucket['Category'].unique()[0]
varcat['Category_1']=bucket['Category_1'].unique()[0]
varcat['Category_2']=bucket['Category_2'].unique()[0]
varcat['Category_3']=bucket['Category_3'].unique()[0]
varcat['Category_1_Score']=bucket['Category_1_Values'].mean()
varcat['Category_2_Score']=bucket['Category_2_Values'].mean()
varcat['Category_3_Score']=bucket['Category_3_Values'].mean()
varcat=varcat.reset_index()
category_tfidfs=pd.concat([varcat[varcat.columns[1:]],category_tfidfs])
bar.update(1)
category_tfidfs.to_csv(Path+'CategoryTFIDFSummary_WEPCAISOForestFindingsKMeansV3.csv', index=False)
```
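For reference, the weighting behind `TfidfVectorizer` can be sketched in pure Python (textbook tf-idf; scikit-learn's variant additionally smooths the idf and L2-normalizes each row):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document term weights: tf * log(n_docs / doc_frequency)."""
    n = len(docs)
    counts = [Counter(doc.split()) for doc in docs]
    df = Counter()
    for c in counts:
        df.update(c.keys())
    return [{term: tf * math.log(n / df[term]) for term, tf in c.items()}
            for c in counts]

weights = tfidf(["gene protein cell", "gene network graph"])
# 'gene' occurs in every document, so its idf -- and weight -- is 0
```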
# Visualization
```
def CattoID(input_cat):
if(input_cat=='Engineering Sciences'):
return 0
elif(input_cat=='Humanities and Social Sciences'):
return 1
elif(input_cat=='Natural Sciences'):
return 2
elif(input_cat=='Life Sciences'):
return 3
else :
return -1
NewMergedDSAligned2=pd.concat([merged_data,PredictedValues[PredictedValues.columns[4:12]]], axis=1, ignore_index=False)
NewMergedDSAligned2.loc[:,'Category_1_ID']=NewMergedDSAligned2.Category_1.apply(CattoID)
NewMergedDSAligned2.loc[:,'Category_2_ID']=NewMergedDSAligned2.Category_2.apply(CattoID)
NewMergedDSAligned2.loc[:,'Category_3_ID']=NewMergedDSAligned2.Category_3.apply(CattoID)
NewMergedDSAligned2=pd.DataFrame(enumerate(NewMergedDSAligned2.SubjectArea.unique()), columns=['Subjectid','SubjectAreaMatching']).merge(NewMergedDSAligned2,left_on='SubjectAreaMatching', right_on='SubjectArea')
cats=['Engineering Sciences','Humanities and Social Sciences', 'Life Sciences','Natural Sciences']
cats_dist=[]
## Finding the overall similarity
for c, w in NewMergedDSAligned2[(NewMergedDSAligned2['Category']!=NewMergedDSAligned2['Category_1']) & (NewMergedDSAligned2['ISOForestCluster']!=1)].groupby('Category'):
#print('')
#print(c, len(w))
#other_cat=list(filter(lambda x:x!=c, cats))
cat_dist=[]
for oc in cats:
if oc==c:
oc_sim=0
else:
oc_sum=sum(w[w['Category_1']==oc].Category_1_Values.tolist()+w[w['Category_2']==oc].Category_2_Values.tolist()+w[w['Category_3']==oc].Category_3_Values.tolist())
oc_sim=oc_sum/len(w)
cat_dist.append(oc_sim)
#print(c,':',oc,'-', round(oc_sim,2))
#oc_sum=w[w['Category_1']==oc].Category_1_Values.tolist()+w[w['Category_2']==oc].Category_2_Values.tolist()+w[w['Category_3']==oc].Category_3_Values.tolist()
#oc_sim=sum(oc_sum)/len(oc_sum)
#print(c,':',oc,'-', round(oc_sim,2))
cats_dist.append(np.array(cat_dist))
cats_dist=np.array(cats_dist)
## Making the similarity matrix symmetric
sym_dist=np.zeros(cats_dist.shape)
for i in range(cats_dist.shape[0]):
for j in range(cats_dist.shape[0]):
sym_dist[i][j]=(cats_dist[i][j]+ cats_dist[j][i])/2
if(i==j):
sym_dist[i][j]=1
# 1-x : convert similarity to distance
sym_dist=1-pd.DataFrame(sym_dist, columns=cats, index=cats)
## Generating coordinates from distance
#, angle=0.8
#coords = TSNE(n_components=2,perplexity=.1, random_state=12, metric='precomputed').fit_transform(sym_dist)
#coords = TSNE(n_components=2,perplexity=4.2, random_state=18, metric='precomputed').fit_transform(sym_dist)
coords = PCA(n_components=2).fit_transform(sym_dist)
coords=MinMaxScaler([0,1000]).fit_transform(coords)
coords=pd.DataFrame(coords, index=cats).reset_index()
p1=sns.scatterplot(
x=0, y=1,
hue="index",
# palette=sns.color_palette("hls", 4),
data=coords,
# legend="full",
alpha=1,
size = 8,
legend=False
);
for line in range(0,coords.shape[0]):
p1.text(coords[0][line]+0.01, coords[1][line], cats[line], horizontalalignment='left', size='medium', color='black')
sym_dist
newrange=pd.DataFrame(NewMergedDSAligned2.Category.value_counts()/80).reset_index().merge(coords,left_on='index',right_on='index')
newrange.loc[:,'Min_X']=newrange[0]-newrange['Category']
newrange.loc[:,'Max_X']=newrange[0]+newrange['Category']
newrange.loc[:,'Min_Y']=newrange[1]-(newrange['Category']*.60)
newrange.loc[:,'Max_Y']=newrange[1]+(newrange['Category']*.60)
newrange.columns=['Category','size', 0, 1, 'Min_X', 'Max_X', 'Min_Y', 'Max_Y']
newrange
catsperplexity={'Engineering Sciences':5,'Humanities and Social Sciences':5, 'Life Sciences':10,'Natural Sciences':8}
## t-SNE separately for each category
outerclusterfeatures=['Category_1_Values','Category_1_ID','Category_2_ID','Category_2_Values','Category_3_ID','Category_3_Values','Subjectid']
#Doc2VecModelData=pd.concat([pd.DataFrame(model.docvecs.vectors_docs),NewMergedDSAligned2[outerclusterfeatures]], axis=1)
Doc2VecModelData=pd.concat([pd.DataFrame(pca[:,:10]),NewMergedDSAligned2[outerclusterfeatures]], axis=1)
Doc2VecModelData['tsne-2d-one']=0
Doc2VecModelData['tsne-2d-two']=0
for cat in cats:#['Life Sciences']:#
print(str(datetime.datetime.now()),'Started for', cat)
tsne = TSNE(n_components=2, perplexity=catsperplexity[cat], n_iter=300, random_state=0, learning_rate=100)
## The perplexity is related to the number of nearest neighbors that is used in other manifold learning algorithms.
## Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50.
tsne_results = tsne.fit_transform(Doc2VecModelData[NewMergedDSAligned2.Category==cat])
Doc2VecModelData.loc[NewMergedDSAligned2.Category==cat,'tsne-2d-one'] = tsne_results[:,0]
Doc2VecModelData.loc[NewMergedDSAligned2.Category==cat,'tsne-2d-two'] = tsne_results[:,1]
print(str(datetime.datetime.now()),'Completed for', cat)
Doc2VecModelData.loc[:,'Category'] = NewMergedDSAligned2.Category
Doc2VecModelData.loc[:,'Category_1'] = NewMergedDSAligned2.Category_1
# Reshaping
for cat in cats:
model_x=MinMaxScaler([newrange[newrange['Category']==cat].Min_X.values[0],newrange[newrange['Category']==cat].Max_X.values[0]])
Doc2VecModelData.loc[Doc2VecModelData['Category']==cat,'tsne-2d-one']=model_x.fit_transform(Doc2VecModelData[Doc2VecModelData['Category']==cat][['tsne-2d-one']])
model_y=MinMaxScaler([newrange[newrange['Category']==cat].Min_Y.values[0],newrange[newrange['Category']==cat].Max_Y.values[0]])
Doc2VecModelData.loc[Doc2VecModelData['Category']==cat,'tsne-2d-two']=model_y.fit_transform(Doc2VecModelData[Doc2VecModelData['Category']==cat][['tsne-2d-two']])
cat='Life Sciences'#'Engineering Sciences'#'Life Sciences'#'Humanities and Social Sciences'#'Life Sciences'#'
plt.figure(figsize=(13,8))
sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="Category_1",
data=Doc2VecModelData[Doc2VecModelData.Category==cat],
legend="full",
# style='Category_1',
alpha=0.8
);
plt.figure(figsize=(13,8))
sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="Category_1",
data=Doc2VecModelData,
legend="full",
style='Category',
alpha=0.8
);
def label_genarator(input):
if((input.Category==input.Category_1) or (input.ISOForestCluster==1)):
return ''#'Category : '+input.Category
else:
if((input.Category_3_Values==0) and (input.Category_2_Values==0)):
return '('+input.Category_1+' '+str(round(input.Category_1_Values*100))+'%'+')'
elif((input.Category_3_Values==0) and (input.Category_2_Values!=0)):
return '('+input.Category_1+' '+str(round(input.Category_1_Values*100))+'%, '+input.Category_2+' '+str(round(input.Category_2_Values*100))+'%)'
else:
return '('+input.Category_1+' '+str(round(input.Category_1_Values*100))+'%, '+input.Category_2+' '+str(round(input.Category_2_Values*100))+'%, '+input.Category_3+' '+str(round(input.Category_3_Values*100))+'%)'
Report_extrat=pd.concat([NewMergedDSAligned2[['Name','Institution','FundingFrom','FundingEnd', 'Category','Category_1_Values','Category_2_Values','Category_3_Values','Category_1','Category_2','Category_3','ISOForestCluster']],Doc2VecModelData[['tsne-2d-one', 'tsne-2d-two']]], axis=1)
Report_extrat['ProjectURL']=NewMergedDSAligned2.SubUrl.apply(lambda x:'https://gepris.dfg.de'+x)
Report_extrat['label']=Report_extrat.apply(label_genarator, axis=1)
Report_extrat['interdiscipilinary']=False
Report_extrat.loc[(Report_extrat.label!='') & (Report_extrat['ISOForestCluster']!=1),'interdiscipilinary']=True
Report_extrat['color']=Report_extrat['Category']
Report_extrat.loc[Report_extrat['interdiscipilinary'],'color']=Report_extrat.loc[Report_extrat['interdiscipilinary'],'Category_1']
Report_extrat.to_csv(Path+'Report_WEPCAISOForestFindingsKMeansV3.csv', index=False)
newrange.to_csv(Path+'CATRANGE_WEPCAISOForestFindingsKMeansV3.csv', index=False)
```
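One detail from the Visualization code above: the loop that symmetrizes `cats_dist` and converts similarity to distance can be vectorized. A numpy sketch:

```python
import numpy as np

def similarity_to_distance(sim):
    """Average a square matrix with its transpose, set the diagonal to 1
    (self-similarity), then convert similarity to distance as 1 - s."""
    sym = (sim + sim.T) / 2
    np.fill_diagonal(sym, 1.0)
    return 1 - sym

d = similarity_to_distance(np.array([[0.0, 0.2],
                                     [0.4, 0.0]]))
# off-diagonal distance: 1 - (0.2 + 0.4) / 2 = 0.7; diagonal: 0
```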
<a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/exp-cscv/exp-cscv_cscv_1w_ale_plotting.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Experiment Description
> This notebook is for experiment \<exp-cscv\> and data sample \<cscv\>.
### Initialization
```
%load_ext autoreload
%autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetching code and data (if you are using Colab)
if in_colab:
!rm -rf s2search
!git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
%cd s2search/pipelining/exp-cscv/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
```
### Loading data
```
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from getting_data import read_conf
from s2search_score_pdp import pdp_based_importance
sample_name = 'cscv'
f_list = [
'title', 'abstract', 'venue', 'authors',
'year',
'n_citations'
]
ale_xy = {}
ale_metric = pd.DataFrame(columns=['feature_name', 'ale_range', 'ale_importance', 'absolute mean'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_1w_ale_{f}.npz')
if os.path.exists(file):
nparr = np.load(file)
quantile = nparr['quantile']
ale_result = nparr['ale_result']
values_for_rug = nparr.get('values_for_rug')
ale_xy[f] = {
'x': quantile,
'y': ale_result,
'rug': values_for_rug,
'weird': ale_result[len(ale_result) - 1] > 20
}
if f != 'year' and f != 'n_citations':
ale_xy[f]['x'] = list(range(len(quantile)))
ale_xy[f]['numerical'] = False
else:
ale_xy[f]['xticks'] = quantile
ale_xy[f]['numerical'] = True
ale_metric.loc[len(ale_metric.index)] = [f, np.max(ale_result) - np.min(ale_result), pdp_based_importance(ale_result, f), np.mean(np.abs(ale_result))]
# print(len(ale_result))
print(ale_metric.sort_values(by=['ale_importance'], ascending=False))
print()
```
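The three importance measures tabulated above can be illustrated on a toy curve. `pdp_based_importance` comes from the project's own `s2search_score_pdp` module; the sketch below substitutes the standard deviation of the curve, a common PDP/ALE-based importance heuristic (an assumption, not necessarily the project's exact definition):

```python
import numpy as np

# Hypothetical ALE curve for a single feature (assumed values, for illustration only).
ale = np.array([-1.0, -0.5, 0.0, 0.5, 2.0])

ale_range = np.max(ale) - np.min(ale)   # spread of the ALE curve, as in the 'ale_range' column
abs_mean = np.mean(np.abs(ale))         # average absolute effect, as in the 'absolute mean' column
importance = np.std(ale)                # stand-in for pdp_based_importance

print(ale_range)   # 3.0
print(abs_mean)    # 0.8
```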
### ALE Plots
```
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.ticker import MaxNLocator
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'ALE',
'ale_xy': ale_xy['title']
},
{
'xlabel': 'Abstract',
'ale_xy': ale_xy['abstract']
},
{
'xlabel': 'Authors',
'ale_xy': ale_xy['authors'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 14],
# }
},
{
'xlabel': 'Venue',
'ale_xy': ale_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.3, 0.3, 0.47, 0.47],
# 'x_limit': [89, 93],
# 'y_limit': [-1, 13],
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'ALE',
'ale_xy': ale_xy['year'],
# 'zoom': {
# 'inset_axes': [0.15, 0.4, 0.4, 0.4],
# 'x_limit': [2019, 2023],
# 'y_limit': [1.9, 2.1],
# },
},
{
'xlabel': 'Citations',
'ale_xy': ale_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.4, 0.65, 0.47, 0.3],
# 'x_limit': [-1000.0, 12000],
# 'y_limit': [-0.1, 1.2],
# },
},
]
def pdp_plot(confs, title):
fig, axes_list = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
        axes = axes_list if len(confs) == 1 else axes_list[subplot_idx]
sns.rugplot(conf['ale_xy']['rug'], ax=axes, height=0.02)
axes.axhline(y=0, color='k', linestyle='-', lw=0.8)
axes.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axes.grid(alpha = 0.4)
# axes.set_ylim([-2, 20])
axes.xaxis.set_major_locator(MaxNLocator(integer=True))
axes.yaxis.set_major_locator(MaxNLocator(integer=True))
if ('ylabel' in conf):
axes.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
# if ('xticks' not in conf['ale_xy'].keys()):
# xAxis.set_ticklabels([])
axes.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['ale_xy']['weird']):
if (conf['ale_xy']['numerical']):
axes.set_ylim([-1.5, 1.5])
pass
else:
axes.set_ylim([-7, 19])
pass
if 'zoom' in conf:
axins = axes.inset_axes(conf['zoom']['inset_axes'])
axins.xaxis.set_major_locator(MaxNLocator(integer=True))
axins.yaxis.set_major_locator(MaxNLocator(integer=True))
axins.plot(conf['ale_xy']['x'], conf['ale_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axes.indicate_inset_zoom(axins)
connects[0].set_visible(False)
connects[1].set_visible(False)
connects[2].set_visible(True)
connects[3].set_visible(True)
subplot_idx += 1
pdp_plot(categorical_plot_conf, f"ALE for {len(categorical_plot_conf)} categorical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
pdp_plot(numerical_plot_conf, f"ALE for {len(numerical_plot_conf)} numerical features")
# plt.savefig(os.path.join('.', 'plot', f'{sample_name}-1wale-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
```
|
github_jupyter
|
```
import mcvine
from instrument.geometry.pml import weave
from instrument.geometry import operations,shapes
import math
import os, sys
parent_dir = os.path.abspath(os.pardir)
libpath = os.path.join(parent_dir, 'c3dp_source')
figures_path = os.path.join (parent_dir, 'figures')
sample_path = os.path.join (parent_dir, 'sample')
if not libpath in sys.path:
sys.path.insert(0, libpath)
# sys.path.insert(0, '/home/fi0/python3/lib/python3.5/site-packages')
import SCADGen.Parser
from collimator_zigzagBlade_old import Collimator_geom
from DAC_geo import DAC
# import viewscad
# import solid
scad_flag = True ########CHANGE CAD FLAG HERE
if scad_flag is True:
savepath = figures_path
else:
savepath = sample_path
dac=DAC()
anvil=dac.anvil()
# gasket=dac.gasket()
# gasket_holder=dac.gasket_holder()
sorrounding_gasket=dac.sorrounding_gasket()
gasket_at_sample = dac.gasket_contact_with_sample()
gasket_at_anvil = dac.gasket_contact_with_anvil()
snap_seat=dac.snap_seat()
vision_seat = dac.vision_seat()
snap_piston=dac.snap_piston()
vision_piston = dac.vision_piston()
snap_seat_pistion=dac.snap_seat_piston()
vision_seat_pistion=dac.vision_seat_piston()
bar=dac.body_bar()
sample=dac.sample()
cell=operations.unite(operations.unite(operations.unite(anvil,gasket_at_anvil),gasket_at_sample),
vision_seat_pistion)
colimator_front_end_from_center= 74.  # although the cell diameter is 3 mm, the collimator cannot start
                                      # at 3 mm from the center: at that radius there would be no blades,
                                      # only channels, since the minimum channel thickness is 3 mm
length_of_each_part=60.
########################### LAST PART COMPONENTS ##############################
coll1_length=length_of_each_part
channel1_length=length_of_each_part
min_channel_wall_thickness=1.
minimum_channel_size = 3.
coll1_height_detector=150.
coll1_width_detector=60*2.
coll1_height_detector_right=coll1_height_detector+20.
coll1_front_end_from_center=colimator_front_end_from_center+(2.*length_of_each_part)
print ('last collimator front end from center', coll1_front_end_from_center)
coll1_length_fr_center=coll1_front_end_from_center+coll1_length
print ('last collimator back end from center', coll1_length_fr_center)
import numpy as np
wall_angular_thickness=2*(np.rad2deg(np.arctan((min_channel_wall_thickness/2.)/coll1_length_fr_center)))
print ('wall angular thickness', wall_angular_thickness)
channel_angular_thickness=2*(np.rad2deg(np.arctan((minimum_channel_size/2.)/coll1_length_fr_center)))
print ('channel angular thickness', channel_angular_thickness)
########################### FIRST PART ##############################
coll3_length=length_of_each_part
channel3_length=length_of_each_part
coll3_inner_radius=colimator_front_end_from_center+(0.*length_of_each_part)
print ('inner radius',coll3_inner_radius)
coll3_outer_radius=coll3_length+coll3_inner_radius
print ('outer radius', coll3_outer_radius)
coll3_channel_gap_at_detector = (minimum_channel_size/coll3_inner_radius)*coll3_outer_radius
print ('minimum channel gap at big end', coll3_channel_gap_at_detector)
coll3_height_detector=(coll1_height_detector/coll1_length_fr_center)*coll3_outer_radius
coll3_height_detector_right=(coll1_height_detector_right/coll1_length_fr_center)*coll3_outer_radius
print ('height detector', coll3_height_detector)
coll3_width_detector=(coll1_width_detector/coll1_length_fr_center)*coll3_outer_radius #half part
# coll3_width_detector=(coll1_width_detector/coll1_length_fr_center)*coll3_outer_radius*2 #full part
print ('width detector', coll3_width_detector)
vertical_odd_blades= True
horizontal_odd_blades =True
coll3 = Collimator_geom()
coll3.set_constraints(max_coll_height_detector=coll3_height_detector,
max_coll_width_detector=coll3_width_detector,
min_channel_wall_thickness=min_channel_wall_thickness,
max_coll_length=coll3_length,
min_channel_size=3,
collimator_front_end_from_center=coll3_inner_radius,
# remove_vertical_blades_manually =True, #only full part
# vertical_blade_index_list_toRemove = [7],#only full part
# remove_horizontal_blades_manually = True, #only full part
# horizontal_blade_index_list_toRemove = [9], #only full part
collimator_parts=False,
no_right_border= True,
no_top_border = False,
horizontal_odd_blades = False,
vertical_odd_blades = False,
)
horizontal_acceptance_angle = coll3.horizontal_acceptance_angle
print ('horizontal acceptance angle', coll3.horizontal_acceptance_angle)
print ('vertical acceptance angle' , coll3.vertical_acceptance_angle)
rotation_angle_for_right_parts = horizontal_acceptance_angle/2.
fist_vertical_number_blades = math.floor (coll3.Vertical_number_channels(channel3_length))
fist_horizontal_number_blades = math.floor(coll3.Horizontal_number_channels(channel3_length))
print ('vertical #channels' , fist_vertical_number_blades)
print ('horizontal # channels' , fist_horizontal_number_blades)
if fist_vertical_number_blades %2 ==0:
fist_vertical_number_blades-=1
if fist_horizontal_number_blades %2 ==0:
fist_horizontal_number_blades-=1
# number_ch_vertical =int(fist_vertical_number_blades)
# fist_vertical_number_blades=is_prime (number_ch_vertical)
# if fist_vertical_number_blades is None :
# fist_vertical_number_blades=find_previous_prime(number_ch_vertical)
# number_ch_horizontal =int(fist_horizontal_number_blades)
# fist_horizontal_number_blades=is_prime (number_ch_horizontal)
# if fist_horizontal_number_blades is None :
# fist_horizontal_number_blades=find_previous_prime(number_ch_horizontal)
print ('modified vertical #channels' , fist_vertical_number_blades)
print ('modified horizontal # channels' , fist_horizontal_number_blades)
# if vertical_odd_blades:
# if fist_vertical_number_blades %2 != 0:
# fist_vertical_number_blades-= 1
# else:
# if fist_vertical_number_blades %2 ==0:
# fist_vertical_number_blades-=1
# if horizontal_odd_blades:
# if fist_horizontal_number_blades %2 != 0:
# fist_horizontal_number_blades-= 1
# else:
# if fist_horizontal_number_blades %2 ==0:
# fist_horizontal_number_blades-=1
# coll3.set_parameters(vertical_number_channels=28,horizontal_number_channels=11*2,
# channel_length =channel3_length) # the full first part
# coll3_R.set_parameters(vertical_number_channels=28,horizontal_number_channels=11*2
# ,channel_length =channel3_length)
# fist_vertical_number_blades =3
# fist_horizontal_number_blades =3
coll3.set_parameters(vertical_number_channels=fist_vertical_number_blades,horizontal_number_channels=fist_horizontal_number_blades,
channel_length =channel3_length)
print ('vertical channel angle :' ,coll3.vertical_channel_angle)
print ('horizontal channel angle :' ,coll3.horizontal_channel_angle)
col_first = coll3.gen_one_col(collimator_Nosupport=True)
# coli_first_right = coll3_R.gen_collimators(detector_angles=[180.+ 12],multiple_collimator=False, collimator_Nosupport=True)
########################## MIDDLE PART #########################################
testing_distance = 0
coll2_length=length_of_each_part
channel2_length=length_of_each_part
coll2_inner_radius=colimator_front_end_from_center+(1.*length_of_each_part) + testing_distance
print ('inner radius', coll2_inner_radius)
coll2_outer_radius=coll2_length+coll2_inner_radius
print ('outer radius', coll2_outer_radius)
coll2_channel_gap_at_detector = (minimum_channel_size/coll2_inner_radius)*coll2_outer_radius
print ('minimum channel gap at big end', coll2_channel_gap_at_detector)
coll2_height_detector=(coll1_height_detector/coll1_length_fr_center)*coll2_outer_radius
coll2_height_detector_right=(coll1_height_detector_right/coll1_length_fr_center)*coll2_outer_radius
print ('collimator height at detector', coll2_height_detector)
coll2_width_detector=(coll1_width_detector/coll1_length_fr_center)*coll2_outer_radius
print ('coll2_width_detector', coll2_width_detector)
coll2_channel_index_to_remove = int (coll3_channel_gap_at_detector/minimum_channel_size)
print ('channel index to remove' ,coll2_channel_index_to_remove)
coll2 = Collimator_geom()
coll2.set_constraints(max_coll_height_detector=coll2_height_detector,
max_coll_width_detector=coll2_width_detector,
min_channel_wall_thickness=min_channel_wall_thickness,
max_coll_length=coll2_length,
min_channel_size=3,
collimator_front_end_from_center=coll2_inner_radius,
collimator_parts=True,
initial_collimator_horizontal_channel_angle=0.0,
initial_collimator_vertical_channel_angle= 0.0,
remove_vertical_blades_manually =True,
# vertical_blade_index_list_toRemove = [2,5],
# remove_horizontal_blades_manually =True,
# horizontal_blade_index_list_toRemove = [2,5],
no_right_border= True,
no_top_border = False,
vertical_even_blades= False,
horizontal_even_blades= False)
middle_vertical_number_blades = math.floor (coll2.Vertical_number_channels(channel2_length))
middle_horizontal_number_blades = math.floor(coll2.Horizontal_number_channels(channel2_length))
print ('vertical # channels', coll2.Vertical_number_channels(channel2_length))
print ('horizontal # channels', coll2.Horizontal_number_channels(channel2_length))
if middle_vertical_number_blades %2 ==0:
middle_vertical_number_blades-=1
if middle_horizontal_number_blades %2 ==0:
middle_horizontal_number_blades-=1
# number_ch_vertical_middle =int(middle_vertical_number_blades)
# middle_vertical_number_blades=is_prime (number_ch_vertical_middle)
# if middle_vertical_number_blades is None :
# middle_vertical_number_blades=find_previous_prime(number_ch_vertical_middle)
# number_ch_horizontal_middle =int(middle_horizontal_number_blades)
# middle_horizontal_number_blades=is_prime (number_ch_horizontal_middle)
# if middle_horizontal_number_blades is None :
# middle_horizontal_number_blades=find_previous_prime(number_ch_horizontal_middle)
print ('modified vertical #channels' , middle_vertical_number_blades)
print ('modified horizontal # channels' , middle_horizontal_number_blades)
coll2.set_parameters(vertical_number_channels=(fist_vertical_number_blades),horizontal_number_channels=(fist_horizontal_number_blades),
channel_length =channel2_length)
print ('vertical channel angle :' ,coll2.vertical_channel_angle)
print ('horizontal channel angle :' ,coll2.horizontal_channel_angle)
coli_middle = coll2.gen_one_col(collimator_Nosupport=True)
# print (coll1_height_detector/coll2_height_detector)
#################### LAST PARTS ################################
# coll1_channel_index_to_remove = int (coll2_channel_gap_at_detector/minimum_channel_size)
# print ('channel index to remove' ,coll1_channel_index_to_remove)
col_last_left = Collimator_geom()
col_last_left.set_constraints(max_coll_height_detector=coll1_height_detector,
max_coll_width_detector=coll1_width_detector,
min_channel_wall_thickness=min_channel_wall_thickness,
max_coll_length=coll1_length,
min_channel_size=3.,
collimator_front_end_from_center=coll1_front_end_from_center,
# remove_horizontal_blades_manually =True,
# horizontal_blade_index_list_toRemove = [2,5,11,14,20,23],
# remove_vertical_blades_manually =True,
# vertical_blade_index_list_toRemove = [2,5,11,14,20,23],
# collimator_parts=True,
no_right_border= True,
no_top_border = False,
vertical_odd_blades=False,
horizontal_odd_blades=False )
last_vertical_number_blades = math.floor (col_last_left.Vertical_number_channels(channel1_length))
last_horizontal_number_blades = math.floor(col_last_left.Horizontal_number_channels(channel1_length))
print ('vertical # channels', col_last_left.Vertical_number_channels(channel1_length))
print ('horizontal # channels' , col_last_left.Horizontal_number_channels(channel1_length))
# if last_vertical_number_blades %2 ==0:
# last_vertical_number_blades-=1
# if last_horizontal_number_blades %2 ==0:
# last_horizontal_number_blades-=1
# number_ch_vertical_last =int(last_vertical_number_blades)
# last_vertical_number_blades=is_prime (number_ch_vertical_last)
# if last_vertical_number_blades is None :
# last_vertical_number_blades=find_previous_prime(number_ch_vertical_last)
# number_ch_horizontal_last =int(last_horizontal_number_blades)
# last_horizontal_number_blades=is_prime (number_ch_horizontal_last)
# if last_horizontal_number_blades is None :
# last_horizontal_number_blades=find_previous_prime(number_ch_horizontal_last)
print ('modified vertical #channels' , last_vertical_number_blades)
print ('modified horizontal # channels' , last_horizontal_number_blades)
col_last_left.set_parameters(vertical_number_channels=fist_vertical_number_blades,horizontal_number_channels=fist_horizontal_number_blades,
channel_length =channel1_length)
print ('vertical channel angle :' ,col_last_left.vertical_channel_angle)
print ('horizontal channel angle :' ,col_last_left.horizontal_channel_angle)
colilast = col_last_left.gen_one_col(collimator_Nosupport=True)
pyr_lateral_middle = shapes.pyramid(
thickness='%s *mm' % coll1_height_detector_right,
# height='%s *mm' % (height),
height='%s *mm' % (coll1_length_fr_center),
width='%s *mm' % coll1_width_detector)
pyr_lateral_middle = operations.rotate(pyr_lateral_middle, transversal=1, angle='%s *degree' % (90))
pyr_lateral_left_middle = operations.rotate(pyr_lateral_middle, vertical="1",
angle='%s*deg' % (180 + 180-rotation_angle_for_right_parts-wall_angular_thickness-channel_angular_thickness-0.03))
pyr_lateral_right_middle = operations.rotate(pyr_lateral_middle, vertical="1",
angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts+wall_angular_thickness+channel_angular_thickness+0.15)))
# pyr_lateral_right_last = operations.rotate(pyr_lateral, vertical="1",
# angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts/2.)))
factor = 10
pyr_lateral_last = shapes.pyramid(
thickness='%s *mm' % coll1_height_detector_right,
# height='%s *mm' % (height),
height='%s *mm' % (coll1_length_fr_center+factor),
width='%s *mm' % coll1_width_detector)
wall_angular_thickness_last=2*(np.rad2deg(np.arctan((min_channel_wall_thickness/2.)/(coll1_length_fr_center+10))))
print ('wall angular thickness (pseudo)', wall_angular_thickness_last)
channel_angular_thickness_last=2*(np.rad2deg(np.arctan((minimum_channel_size/2.)/(coll1_length_fr_center+10))))
print ('channel angular thickness (pseudo)', channel_angular_thickness_last)
pyr_lateral_last = operations.rotate(pyr_lateral_last, transversal=1, angle='%s *degree' % (90))
pyr_lateral_left_last = operations.rotate(pyr_lateral_last, vertical="1",
angle='%s*deg' % (180 + 180-rotation_angle_for_right_parts+wall_angular_thickness-channel_angular_thickness+0.5))
# pyr_lateral_right_last = operations.rotate(pyr_lateral_last, vertical="1",
# angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts+(wall_angular_thickness*wall_angular_thickness_last*factor)+channel_angular_thickness-0.12)))
pyr_lateral_right_last = operations.rotate(pyr_lateral_last, vertical="1",
angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts+0.75)))
# pyr_lateral_right_last = operations.rotate(pyr_lateral, vertical="1",
# angle='%s*deg' % (180 - (180-rotation_angle_for_right_parts/2.)))
# both=operations.unite(coli_middle_left, colilast_left)
# both= operations.unite(operations.unite
# (operations.unite(operations.unite(operations.unite(colilast_left, col_first), colilast_right),
# coli_first_right), coli_middle_left), coli_middle_right)
whole= operations.unite(operations.unite(colilast, col_first),
coli_middle)
whole_first_part = col_first
whole_middle_part = coli_middle
whole_last_part = colilast
first_middle = operations.unite (col_first, coli_middle)
middle_last = operations.unite (coli_middle, colilast)
# first_left = operations.subtract(whole_first_part, pyr_lateral_right)
# first_right = operations.subtract(whole_first_part, pyr_lateral_left)
middle_left = operations.subtract(whole_middle_part, pyr_lateral_right_middle)
middle_right = operations.subtract(whole_middle_part, pyr_lateral_left_middle)
last_left = operations.subtract(whole_last_part, pyr_lateral_right_last)
last_right = operations.subtract(whole_last_part, pyr_lateral_left_last)
middle_left_last_left = operations.unite( middle_left, last_left)
middle_right_last_right = operations.unite(middle_right, last_right)
whole_joint = operations.unite(operations.unite(middle_left_last_left, middle_right_last_right), whole_first_part)
whole_last_joint = operations.unite(last_left, last_right)
whole_middle_joint = operations.unite(middle_left, middle_right)
# both=operations.unite(coli_middle, coli2R)
# both=operations.unite(operations.unite(operations.unite(coli2, coli3), coli2R), coli3R)
cell_colli = operations.unite(cell, whole)
file='DAC_colli'
filename='%s.xml'%(file)
outputfile=os.path.join(savepath, filename)
with open (outputfile,'wt') as file_h:
weave(cell_colli,file_h, print_docs = False)
# file='last_right_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(last_right,file_h, print_docs = False)
# file='whole_last_joint_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(whole_last_joint,file_h, print_docs = False)
# file='middle_left_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(middle_left,file_h, print_docs = False)
# file='middle_right_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(middle_right,file_h, print_docs = False)
# file='whole_middle_joint_part_New'
# filename='%s.xml'%(file)
# outputfile=os.path.join(savepath, filename)
# with open (outputfile,'wt') as file_h:
# weave(whole_middle_joint,file_h, print_docs = False)
p = SCADGen.Parser.Parser(outputfile)
p.createSCAD()
test = p.rootelems[0]
cadFile_name='%s.scad'%(file)
cad_file_path=os.path.abspath(os.path.join(savepath, cadFile_name))
cad_file_path
!vglrun openscad {cad_file_path}
```
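The angular thicknesses printed in the cell above follow a simple right-triangle relation: a wall or channel of width t at radius L subtends an angle of 2·arctan((t/2)/L). A minimal check, using the script's values (1 mm wall, 3 mm channel, and a back-end radius of 254 mm, i.e. 74 + 2·60 + 60):

```python
import numpy as np

def angular_thickness(width_mm, radius_mm):
    """Angle in degrees subtended by a blade wall or channel of the given width at the given radius."""
    return 2.0 * np.rad2deg(np.arctan((width_mm / 2.0) / radius_mm))

# Values taken from the collimator script above.
wall = angular_thickness(1.0, 254.0)      # min_channel_wall_thickness at coll1_length_fr_center
channel = angular_thickness(3.0, 254.0)   # minimum_channel_size at coll1_length_fr_center
print(wall, channel)
```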
#
|
github_jupyter
|
```
import pandas as pd
import json
import numpy as np
df = pd.read_csv('data/2015-16_gems_jade.csv')
# df = df[df['country'] == "Myanmar"]
df.head()
```
## Clean data
```
df_gemsjade = df
df_gemsjade.rename(columns={'company_name': 'Company_name_cl'}, inplace=True)
df_gemsjade['type'] = 'entity'
df_gemsjade['target_type'] = ''
df_gemsjade.head()
append_dict_others = [{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Production Royalties', 'value_reported': 7627853015 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Sale Split', 'value_reported': 122565309 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Sales Royalties', 'value_reported': 1593381858 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Service Fees', 'value_reported': 682734625 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Permit Fees', 'value_reported': 89494475808 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Incentive Fees', 'value_reported': 213344561 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Other significant payments', 'value_reported': 2637093845 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Emporium Fees / Sale Fees', 'value_reported': 2987472820 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Sale Split', 'value_reported': 5272471367 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Myanmar Gems Enterprise',
'name_of_revenue_stream': 'Commercial Tax', 'value_reported': 41353324562 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Customs Department',
'name_of_revenue_stream': 'Customs Duties', 'value_reported': 2985923707 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Internal Revenue Department',
'name_of_revenue_stream': 'Commercial Tax', 'value_reported': 244080055733 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Internal Revenue Department',
'name_of_revenue_stream': 'Royalties', 'value_reported': 76123095911 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Internal Revenue Department',
'name_of_revenue_stream': 'Income Tax', 'value_reported': 4377920086 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Internal Revenue Department',
'name_of_revenue_stream': 'Withholding Tax', 'value_reported': 51281036 },
{'Company_name_cl': 'Companies not in EITI Reconciliation', 'type': 'entity',
'paid_to': 'Internal Revenue Department',
'name_of_revenue_stream': 'Capital Gains Tax', 'value_reported': 33708244 }
]
others_df = pd.DataFrame(append_dict_others)
#others_df['total_payments'] = others_df['value_reported']
df_gemsjade = pd.concat([df_gemsjade, others_df])
df_gemsjade
# Unify the short label used in the appended rows with the long label used in the CSV.
df_gemsjade['name_of_revenue_stream'] = df_gemsjade['name_of_revenue_stream'].replace({'Other significant payments': 'Other significant payments (> 50,000 USD)'})
company_totals = df_gemsjade.pivot_table(index=['Company_name_cl'], aggfunc='sum')['value_reported']
company_totals = company_totals.to_frame()
company_totals.rename(columns={'value_reported': 'total_payments'}, inplace=True)
company_totals.reset_index(level=0, inplace=True)
company_totals.sort_values(by=['total_payments'], ascending = False, inplace=True)
company_totals
df_gemsjade = pd.merge(df_gemsjade, company_totals, on='Company_name_cl')
```
## Remove negative payments for Sankey
```
df_gemsjade = df_gemsjade[df_gemsjade["value_reported"] > 0]
df_gemsjade = df_gemsjade.sort_values(by=['total_payments'], ascending=False)
df_gemsjade = df_gemsjade.drop(['Unnamed: 0'], axis=1, errors='ignore')
df_gemsjade
df_gemsjade_summary = df_gemsjade[df_gemsjade['Company_name_cl'] != 'Companies not in EITI Reconciliation']
df_gemsjade_summary = df_gemsjade_summary.groupby(['name_of_revenue_stream','paid_to','target_type','type']).sum().reset_index()
df_gemsjade_summary['Company_name_cl'] = 'Companies in EITI Reconciliation'
df_gemsjade_summary = pd.concat([df_gemsjade[df_gemsjade['Company_name_cl'] == 'Companies not in EITI Reconciliation'],
                                 df_gemsjade_summary])
df_gemsjade_summary
```
## Prepare Source-Target-Value dataframe
```
links_companies = pd.DataFrame(columns=['source','target','value','type'])
to_append = df_gemsjade.groupby(['name_of_revenue_stream','paid_to','type'],as_index=False)[['value_reported','total_payments']].sum()
#to_append["target"] = "Myanmar Gems Enterprise"
to_append.rename(columns = {'name_of_revenue_stream':'source', 'value_reported' : 'value', 'paid_to': 'target'}, inplace = True)
to_append = to_append.sort_values(by=['value'], ascending = False)
to_append['target_type'] = 'entity'
links_companies = pd.concat([links_companies,to_append])
print(to_append['value'].sum())
links_companies
## Page 239 of 2015-16 Report. Appendix 8: SOEs reconciliation sheets
append_dict_transfers = [{'source': 'Myanmar Gems Enterprise', 'type': 'entity',
'target': 'Corporate Income Tax (Inter-Government)', 'value': 53788313000 },
{'source': 'Myanmar Gems Enterprise', 'type': 'entity',
'target': 'Commercial Tax (Inter-Government)', 'value': 15000000 },
{'source': 'Myanmar Gems Enterprise', 'type': 'entity',
'target': 'Production Royalties (Inter-Government)', 'value': 17249087176 },
{'source': 'Myanmar Gems Enterprise', 'type': 'entity',
'target': 'State Contribution (Inter-Government)', 'value': 46833942000 },
{'source': 'Corporate Income Tax (Inter-Government)', 'target_type': 'entity',
'target': 'Internal Revenue Department', 'value': 53788313000 },
{'source': 'Commercial Tax (Inter-Government)', 'target_type': 'entity',
'target': 'Internal Revenue Department', 'value': 15000000 },
{'source': 'Production Royalties (Inter-Government)', 'target_type': 'entity',
'target': 'Department of Mines', 'value': 17249087176 },
{'source': 'State Contribution (Inter-Government)', 'target_type': 'entity',
'target': 'Ministry of Planning and Finance', 'value': 46833942000 },
{'source': 'Myanmar Gems Enterprise', 'type': 'entity',
'target': 'Other Accounts', 'value': 107705106000 },
{'source': 'Other Accounts', 'target_type': 'entity',
'target': 'Ministry of Planning and Finance', 'value': 107705106000 },
{'source': 'Internal Revenue Department', 'type': 'entity', 'target_type': 'entity',
'target': 'Ministry of Planning and Finance', 'value': 393194500968 }]
append_dict_transfers_df = pd.DataFrame(append_dict_transfers)
links_summary = pd.concat([links_companies, append_dict_transfers_df])
links_govt = append_dict_transfers_df
#links = pd.concat([links, append_dict_transfers_df])
to_append = df_gemsjade.groupby(['name_of_revenue_stream','Company_name_cl','type'],as_index=False) \
    [['value_reported','total_payments']] \
    .agg({'value_reported':'sum','total_payments':'first'})
to_append.rename(columns = {'Company_name_cl':'source','name_of_revenue_stream':'target', 'value_reported' : 'value'}, inplace = True)
to_append = to_append.sort_values(by=['total_payments'], ascending = False)
links_companies = pd.concat([links_companies,to_append])
print(to_append['value'].sum())
#links
to_append
to_append = df_gemsjade_summary.groupby(['name_of_revenue_stream','Company_name_cl','type'],as_index=False) \
    [['value_reported','total_payments']] \
    .agg({'value_reported':'sum','total_payments':'first'})
to_append.rename(columns = {'Company_name_cl':'source','name_of_revenue_stream':'target', 'value_reported' : 'value'}, inplace = True)
to_append = to_append.sort_values(by=['total_payments'], ascending = False)
links_summary = pd.concat([links_summary,to_append])
links_summary
def prep_nodes_links(links):
unique_source = links['source'].unique()
unique_targets = links['target'].unique()
unique_source = pd.merge(pd.DataFrame(unique_source), links, left_on=0, right_on='source', how='left')
unique_source = unique_source.filter([0,'type'])
unique_targets = pd.merge(pd.DataFrame(unique_targets), links, left_on=0, right_on='target', how='left')
unique_targets = unique_targets.filter([0,'target_type'])
unique_targets.rename(columns = {'target_type':'type'}, inplace = True)
unique_list = pd.concat([unique_source[0], unique_targets[0]]).unique()
unique_list = pd.merge(pd.DataFrame(unique_list), \
pd.concat([unique_source, unique_targets]), left_on=0, right_on=0, how='left')
unique_list.drop_duplicates(subset=0, keep='first', inplace=True)
replace_dict = {k: v for v, k in enumerate(unique_list[0])}
unique_list
return [unique_list,replace_dict]
#unique_list = pd.concat([links['source'], links['target']]).unique()
#replace_dict = {k: v for v, k in enumerate(unique_list)}
[unique_list_summary,replace_dict_summary] = prep_nodes_links(links_summary)
[unique_list_companies,replace_dict_companies] = prep_nodes_links(links_companies)
[unique_list_govt,replace_dict_govt] = prep_nodes_links(links_govt)
def write_nodes_links(filename,unique_list,replace_dict,links):
links_replaced = links.replace({"source": replace_dict,"target": replace_dict})
nodes = pd.DataFrame(unique_list)
nodes.rename(columns = {0:'name'}, inplace = True)
nodes_json= pd.DataFrame(nodes).to_json(orient='records')
links_json= pd.DataFrame(links_replaced).to_json(orient='records')
data = { 'links' : json.loads(links_json), 'nodes' : json.loads(nodes_json) }
data_json = json.dumps(data)
data_json = data_json.replace("\\","")
#print(data_json)
#with open('sankey_data.json', 'w') as outfile:
# json.dump(data_json, outfile)
    with open(filename + ".json", "w") as text_file:
        text_file.write(data_json)
write_nodes_links("sankey_data_2015-16_summary",unique_list_summary,replace_dict_summary,links_summary)
write_nodes_links("sankey_data_2015-16_companies",unique_list_companies,replace_dict_companies,links_companies)
write_nodes_links("sankey_data_2015-16_govt",unique_list_govt,replace_dict_govt,links_govt)
```
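The `prep_nodes_links`/`write_nodes_links` pair above boils down to mapping node names to integer indices, which is the format d3-style Sankey renderers expect. A minimal sketch of that transformation, on toy link data rather than the report's figures:

```python
import json
import pandas as pd

# Toy source-target-value links (hypothetical names and amounts).
links = pd.DataFrame([
    {'source': 'Company A', 'target': 'Royalties', 'value': 100},
    {'source': 'Royalties', 'target': 'Revenue Dept', 'value': 100},
])

# Unique node names in order of first appearance, then a name -> index map.
names = pd.concat([links['source'], links['target']]).unique()
replace_dict = {name: i for i, name in enumerate(names)}

# Replace names with indices and assemble the nodes/links JSON payload.
indexed = links.replace({'source': replace_dict, 'target': replace_dict})
data = {
    'nodes': [{'name': str(n)} for n in names],
    'links': json.loads(indexed.to_json(orient='records')),
}
print(json.dumps(data))
```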
|
github_jupyter
|
<a href="https://colab.research.google.com/github/olgaminguett/ET5003_SEM1_2021-2/blob/main/Week-3/ET5003_Lab_Piecewise_Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<div>
<img src="https://drive.google.com/uc?export=view&id=1vK33e_EqaHgBHcbRV_m38hx6IkG0blK_" width="350"/>
</div>
# **Artificial Intelligence - MSc**
## ET5003 - MACHINE LEARNING APPLICATIONS
### Instructor: Enrique Naredo
### ET5003_Lab_Piecewise_Regression
# INTRODUCTION
**Piecewise regression**, extract from [Wikipedia](https://en.wikipedia.org/wiki/Segmented_regression):
Segmented regression, also known as piecewise regression or broken-stick regression, is a method in regression analysis in which the independent variable is partitioned into intervals and a separate line segment is fit to each interval.
* Segmented regression analysis can also be performed on
multivariate data by partitioning the various independent variables.
* Segmented regression is useful when the independent variables, clustered into different groups, exhibit different relationships between the variables in these regions.
* The boundaries between the segments are breakpoints.
* Segmented linear regression is segmented regression whereby the relations in the intervals are obtained by linear regression.
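The idea in the bullet points above can be illustrated with a minimal NumPy sketch (not part of the lab code): we generate toy data whose slope changes at a known breakpoint and fit a separate line on each interval. The data, slopes, and breakpoint here are made up for illustration.

```python
import numpy as np

# toy "broken-stick" data: slope 1 below the breakpoint, slope 3 above it
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 5, x, 5 + 3 * (x - 5)) + rng.normal(0, 0.1, x.size)

breakpoint_ = 5.0  # assumed known here; in practice it can be estimated
left, right = x < breakpoint_, x >= breakpoint_

# ordinary least-squares line on each interval
slope_l, intercept_l = np.polyfit(x[left], y[left], deg=1)
slope_r, intercept_r = np.polyfit(x[right], y[right], deg=1)
print(slope_l, slope_r)  # approximately 1 and 3
```

When the breakpoint is unknown, a common approach is to search over candidate breakpoints and keep the one with the lowest total residual.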
## Imports
```
# Suppressing Warnings:
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
import pymc3 as pm
import arviz as az
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
# to plot
import matplotlib.colors
from mpl_toolkits.mplot3d import Axes3D
# to generate classification, regression and clustering datasets
import sklearn.datasets as dt
# to create data frames
from pandas import DataFrame
# to generate data from an existing dataset
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV
# Define the seed so that results can be reproduced
seed = 11
rand_state = 11
# Define the color maps for plots
color_map = plt.cm.get_cmap('RdYlBu')
color_map_discrete = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","cyan","magenta","blue"])
```
# DATASET
## Synthetic Dataset
**Synthetic data** plays a very important role in data science: it allows us to test a new algorithm under controlled conditions, because we can generate data that exercises a very specific property or behavior of the algorithm.
* We can test its performance on balanced vs. imbalanced datasets.
* We can evaluate its performance under different noise levels.
* We can establish a baseline of our algorithm's performance under various scenarios.

Real data may be hard or expensive to acquire, or it may have too few data points.
Another reason is privacy, where real data cannot be revealed to others.
### Synthetic Data for Regression
The `sklearn.datasets` package has functions for generating synthetic datasets for regression.
The `make_regression()` function returns a set of input data points (regressors) along with their output (target).
This function can be adjusted with the following parameters:
* `n_samples` - number of samples
* `n_features` - number of dimensions/features of the generated data
* `noise` - standard deviation of the gaussian noise
* The response variable is a linear combination of the generated input set.
* A response variable is something that's dependent on other variables.
* In this particular case, it is a target feature that we're trying to predict using all the other input features.
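The "linear combination" claim is easy to verify: with `coef=True` and `noise=0.0`, `make_regression` also returns the ground-truth coefficients, so the target equals `X @ coef` exactly (a small sanity check, not part of the lab itself):

```python
import numpy as np
import sklearn.datasets as dt

# request the true coefficients of the underlying linear model
X, y, coef = dt.make_regression(n_samples=100, n_features=3,
                                noise=0.0, coef=True, random_state=0)

# with zero noise (and the default bias of 0), y is exactly X @ coef
assert np.allclose(y, X @ coef)
```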
### Example
```
## Example
# data with just 2 features
X1, y1 = dt.make_regression(n_samples=1000, n_features=2,
                            noise=50, random_state=rand_state, effective_rank=1)
scatter_plot2 = plt.scatter(X1[:,0], X1[:,1], c=y1,
                            vmin=min(y1), vmax=max(y1),
                            s=35, cmap=color_map)
```
### Create Synthetic Data
```
# create random data with 4 clusters
from sklearn.datasets import make_classification
num_samples = 5000
X2, y2 = make_classification(n_classes=4, n_features=2, n_samples=num_samples,
                             n_redundant=0, n_informative=2, n_clusters_per_class=1)
# create a data frame
df = DataFrame(dict(x=X2[:,0], y=X2[:,1], label=y2))
# four classes
colors = {0:'red', 1:'green', 2:'blue', 3:'brown'}
# figure
fig, ax = plt.subplots()
grouped = df.groupby('label')
# scatter plot
for key, group in grouped:
    group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
# show the plot
plt.show()

## Main dataset with 4 features
# we'll use this dataset
X3, y3 = dt.make_regression(n_samples=num_samples, n_features=4, n_informative=4,
                            noise=.2, random_state=rand_state)
# intervals to scale features
scaleX0 = (50, 60)
scaleX1 = (4, 7)
scaleX2 = (1, 2)
scaleX3 = (450, 710)
scaleX4 = (30, 5000)
scaleX5 = (10, 800)
scaleY = (150000, 2000000)
# Scale features
f0 = np.interp(X2[:,0], (X2[:,0].min(), X2[:,0].max()), scaleX0)
f1 = np.interp(X2[:,1], (X2[:,1].min(), X2[:,1].max()), scaleX1)
f2 = np.interp(X3[:,0], (X3[:,0].min(), X3[:,0].max()), scaleX2)
f3 = np.interp(X3[:,1], (X3[:,1].min(), X3[:,1].max()), scaleX3)
f4 = np.interp(X3[:,2], (X3[:,2].min(), X3[:,2].max()), scaleX4)
f5 = np.interp(X3[:,3], (X3[:,3].min(), X3[:,3].max()), scaleX5)
# scaled data
X = np.stack((f0,f1,f2,f3,f4,f5), axis=1)
y = np.interp(y3, (y3.min(), y3.max()), scaleY)
```
## Training & Test Data
```
# split data into training and test
from sklearn.model_selection import train_test_split
# training: 70% (0.7), test: 30% (0.3)
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.3)
def replace_with_nan(df, frac):
    """Replace a random fraction of values with nan."""
    # requires numpy & pandas
    rows = np.random.choice(range(df.shape[0]), int(df.shape[0]*frac), replace=False)
    cols = np.random.choice(range(0, df.shape[1]-1), size=len(rows), replace=True)
    to_repl = [np.nan for i, col in zip(rows, cols)]
    # astype casts a pandas object to a specified dtype
    rnan = df.astype(object).to_numpy()
    rnan[rows, cols] = to_repl
    # returns a data frame with nans
    return DataFrame(rnan, index=df.index, columns=df.columns)
```
### Train dataset
```
## create train data frame
# use meaningful names
dftrain = DataFrame(dict(feature_1=X_train[:,0],
                         feature_2=X_train[:,1],
                         feature_3=X_train[:,2],
                         feature_4=X_train[:,3],
                         feature_5=X_train[:,4],
                         feature_6=X_train[:,5],
                         cost=y_train))
# dftrain with nans
dftrain = replace_with_nan(dftrain,.10)
print('Number of nan in train dataset: ',dftrain.isnull().sum().sum())
# show first data frame rows
dftrain.head()
# Generate descriptive statistics
dftrain.describe()
```
### Test dataset
```
## create test data frame
# no cost included
dftest = DataFrame(dict(feature_1=X_test[:,0],
                        feature_2=X_test[:,1],
                        feature_3=X_test[:,2],
                        feature_4=X_test[:,3],
                        feature_5=X_test[:,4],
                        feature_6=X_test[:,5]))
# dftest with nans
dftest = replace_with_nan(dftest,.10)
print('Number of nan in test dataset: ',dftest.isnull().sum().sum())
# show first data frame rows
dftest.head()
# Generate descriptive statistics
dftest.describe()
```
### Expected Cost dataset
```
## create expected cost data frame
# the cost is in another file
dfcost = DataFrame(dict(cost=y_test))
# show first data frame rows
dfcost.head()
# Generate descriptive statistics
dfcost.describe()
```
## Save dataset
You can save your dataset in any location suitable for you; one choice using Colab is to save it in your Google Drive, so you will have it handy for your experiments.
```
# Mount Google drive
from google.colab import drive
drive.mount('/content/drive')
# path to your Google Drive
pathDrive = '/content/drive/My Drive/Colab Notebooks/'
# create a directory (if not exist)
import os
folderName = 'SyntData/'
syntPath = pathDrive+folderName
if os.path.exists(syntPath):
    print(folderName, ' directory already exists')
else:
    # create a directory
    os.mkdir(syntPath)
    print('Now you have a new directory: ', folderName)
# manage versions if you like to save more datasets
# syntTrain1.csv, syntTrain2.csv, etc
## save train dataset into your Drive
filename1 = 'syntTrain.csv'
dftrain.to_csv(syntPath+filename1, encoding='utf-8', index=False)
# save test dataset into your Drive
filename2 = 'syntTest.csv'
dftest.to_csv(syntPath+filename2, encoding='utf-8', index=False)
# save cost dataset into your Drive
filename3 = 'syntCost.csv'
dfcost.to_csv(syntPath+filename3, encoding='utf-8', index=False)
```
## Load your dataset
```
# training dataset:
training_file = syntPath+filename1
# test dataset:
testing_file = syntPath+filename2
# cost dataset:
cost_file = syntPath+filename3
# load train dataset
df_train = pd.read_csv(training_file)
df_train.head()
# load test dataset
df_test = pd.read_csv(testing_file)
df_test.head()
# load cost dataset
df_cost = pd.read_csv(cost_file)
df_cost.head()
```
# PIECEWISE REGRESSION
## Full Model
```
# select some features columns just for the baseline model
# assume not all of the features are informative or useful
# in this exercise you could try all of them
featrain = ['feature_1','feature_2','feature_3','cost']
# dropna: remove missing values
df_subset_train = dftrain[featrain].dropna(axis=0)
featest = ['feature_1','feature_2','feature_3']
df_subset_test = dftest[featest].dropna(axis=0)
# cost
df_cost = df_cost[df_cost.index.isin(df_subset_test.index)]
print('Number of nan in df_subset_train dataset: ',df_subset_train.isnull().sum().sum())
print('Number of nan in df_subset_test dataset: ',df_subset_test.isnull().sum().sum())
# train set, input columns
Xs_train = df_subset_train.iloc[:,0:-1].values
# train set, output column, cost
ys_train = df_subset_train.iloc[:,-1].values.reshape(-1,1)
# test set, input columns
Xs_test = df_subset_test.iloc[:,0:].values
# test set, output column, cost
y_test = df_cost.cost.values
# StandardScaler() will normalize the features i.e. each column of X,
# so, each column/feature/variable will have μ = 0 and σ = 1
sc = StandardScaler()
Xss_train = np.hstack([Xs_train,Xs_train[:,[2]]**2])
xscaler = sc.fit(Xss_train)
Xn_train = xscaler.transform(Xss_train)
Xss_test = np.hstack([Xs_test,Xs_test[:,[2]]**2])
Xn_test = xscaler.transform(Xss_test)
ylog = np.log(ys_train.astype('float'))
yscaler = StandardScaler().fit(ylog)
yn_train = yscaler.transform(ylog)
# model
with pm.Model() as model:
    # prior over the parameters of linear regression
    alpha = pm.Normal('alpha', mu=0, sigma=30)
    # we have one beta for each column of Xn
    beta = pm.Normal('beta', mu=0, sigma=30, shape=Xn_train.shape[1])
    # prior over the variance of the noise
    sigma = pm.HalfCauchy('sigma_n', 5)
    # linear regression model in matrix form
    mu = alpha + pm.math.dot(beta, Xn_train.T)
    # likelihood; make sure that observed is a 1d vector
    like = pm.Normal('like', mu=mu, sigma=sigma, observed=yn_train[:,0])

# number of iterations of the algorithm
# (renamed from `iter` to avoid shadowing the Python builtin)
n_iter = 50000

# run the model
with model:
    approximation = pm.fit(n_iter, method='advi')
# check the convergence
plt.plot(approximation.hist);
# samples from the posterior
posterior = approximation.sample(5000)
# prediction
ll=np.mean(posterior['alpha']) + np.dot(np.mean(posterior['beta'],axis=0), Xn_test.T)
y_pred_BLR = np.exp(yscaler.inverse_transform(ll.reshape(-1,1)))[:,0]
print("MAE = ",(np.mean(abs(y_pred_BLR - y_test))))
print("MAPE = ",(np.mean(abs(y_pred_BLR - y_test) / y_test)))
```
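For reference, the two error measures printed above, MAE (mean absolute error) and MAPE (mean absolute percentage error), on a tiny made-up example:

```python
import numpy as np

y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 190.0, 440.0])

# MAE: average absolute error, in the units of y
mae = np.mean(np.abs(y_pred - y_true))            # (10 + 10 + 40) / 3 = 20
# MAPE: average absolute error relative to the true value
mape = np.mean(np.abs(y_pred - y_true) / y_true)  # (0.10 + 0.05 + 0.10) / 3
print(mae, mape)
```

MAPE weights errors by the size of the true value, which is why it is a natural companion to MAE when the targets span a wide range, as the scaled costs here do.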
## Clustering
### Full Model
```
# training gaussian mixture model
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=4)
# clustering by features 1, 2
ind=[0,1]
X_ind = np.vstack([Xn_train[:,ind],Xn_test[:,ind]])
# Gaussian Mixture
gmm.fit(X_ind)
# plot blue dots
plt.scatter(X_ind[:,0],X_ind[:,1])
# centroids: orange dots
plt.scatter(gmm.means_[:,0],gmm.means_[:,1])
np.max(ys_train)
```
### Clusters
```
# train clusters
clusters_train = gmm.predict(Xn_train[:,ind])
unique_train, counts_train = np.unique(clusters_train, return_counts=True)
dict(zip(unique_train, counts_train))
# test clusters
clusters_test = gmm.predict(Xn_test[:,ind])
unique_test, counts_test = np.unique(clusters_test, return_counts=True)
dict(zip(unique_test, counts_test))
# cluster 0
Xn0 = Xn_train[clusters_train==0,:]
Xtestn0 = Xn_test[clusters_test==0,:]
ylog0 = np.log(ys_train.astype('float')[clusters_train==0,:])
yscaler0 = StandardScaler().fit(ylog0)
yn0 = yscaler0.transform(ylog0)
# cluster 1
Xn1 = Xn_train[clusters_train==1,:]
Xtestn1 = Xn_test[clusters_test==1,:]
ylog1 = np.log(ys_train.astype('float')[clusters_train==1,:])
yscaler1 = StandardScaler().fit(ylog1)
yn1 = yscaler1.transform(ylog1)
# cluster 2
Xn2 = Xn_train[clusters_train==2,:]
Xtestn2 = Xn_test[clusters_test==2,:]
ylog2 = np.log(ys_train.astype('float')[clusters_train==2,:])
yscaler2 = StandardScaler().fit(ylog2)
yn2 = yscaler2.transform(ylog2)
# cluster 3
Xn3 = Xn_train[clusters_train==3,:]
Xtestn3 = Xn_test[clusters_test==3,:]
ylog3 = np.log(ys_train.astype('float')[clusters_train==3,:])
yscaler3 = StandardScaler().fit(ylog3)
yn3 = yscaler3.transform(ylog3)
```
## Piecewise Model
```
# model_0
with pm.Model() as model_0:
    # prior over the parameters of linear regression
    alpha = pm.Normal('alpha', mu=0, sigma=30)
    # we have a beta for each column of Xn0
    beta = pm.Normal('beta', mu=0, sigma=30, shape=Xn0.shape[1])
    # prior over the variance of the noise
    sigma = pm.HalfCauchy('sigma_n', 5)
    # linear regression model in matrix form
    mu = alpha + pm.math.dot(beta, Xn0.T)
    # likelihood; make sure that observed is a 1d vector
    like = pm.Normal('like', mu=mu, sigma=sigma, observed=yn0[:,0])

with model_0:
    # iterations of the algorithm
    approximation = pm.fit(40000, method='advi')

# samples from the posterior
posterior0 = approximation.sample(5000)
# model_1
with pm.Model() as model_1:
    # prior over the parameters of linear regression
    alpha = pm.Normal('alpha', mu=0, sigma=30)
    # we have a beta for each column of Xn1
    beta = pm.Normal('beta', mu=0, sigma=30, shape=Xn1.shape[1])
    # prior over the variance of the noise
    sigma = pm.HalfCauchy('sigma_n', 5)
    # linear regression model in matrix form
    mu = alpha + pm.math.dot(beta, Xn1.T)
    # likelihood; make sure that observed is a 1d vector
    like = pm.Normal('like', mu=mu, sigma=sigma, observed=yn1[:,0])

with model_1:
    # iterations of the algorithm
    approximation = pm.fit(40000, method='advi')

# samples from the posterior
posterior1 = approximation.sample(5000)
# model_2
with pm.Model() as model_2:
    # prior over the parameters of linear regression
    alpha = pm.Normal('alpha', mu=0, sigma=30)
    # we have a beta for each column of Xn2
    beta = pm.Normal('beta', mu=0, sigma=30, shape=Xn2.shape[1])
    # prior over the variance of the noise
    sigma = pm.HalfCauchy('sigma_n', 5)
    # linear regression model in matrix form
    mu = alpha + pm.math.dot(beta, Xn2.T)
    # likelihood; make sure that observed is a 1d vector
    like = pm.Normal('like', mu=mu, sigma=sigma, observed=yn2[:,0])

with model_2:
    # iterations of the algorithm
    approximation = pm.fit(40000, method='advi')

# samples from the posterior
posterior2 = approximation.sample(5000)
# model_3
with pm.Model() as model_3:
    # prior over the parameters of linear regression
    alpha = pm.Normal('alpha', mu=0, sigma=30)
    # we have a beta for each column of Xn3
    beta = pm.Normal('beta', mu=0, sigma=30, shape=Xn3.shape[1])
    # prior over the variance of the noise
    sigma = pm.HalfCauchy('sigma_n', 5)
    # linear regression model in matrix form
    mu = alpha + pm.math.dot(beta, Xn3.T)
    # likelihood; make sure that observed is a 1d vector
    like = pm.Normal('like', mu=mu, sigma=sigma, observed=yn3[:,0])

with model_3:
    # number of iterations of the algorithm
    approximation = pm.fit(40000, method='advi')

# samples from the posterior
posterior3 = approximation.sample(5000)
#############
# Posterior predictive checks (PPCs)
def ppc(alpha, beta, sigma, X, nsamples=500):
    # select nsamples random samples from the posterior
    ind = np.random.randint(0, beta.shape[0], size=nsamples)
    alphai = alpha[ind]
    betai = beta[ind,:]
    sigmai = sigma[ind]
    Ypred = np.zeros((nsamples, X.shape[0]))
    for i in range(X.shape[0]):
        # generate data from the linear model
        y_pred = alphai + np.dot(betai, X[i:i+1,:].T).T + np.random.randn(len(sigmai))*sigmai
        Ypred[:,i] = y_pred[0,:]
    return Ypred
```
## Simulations
### Only Cluster 0
```
# Simulation
Ypred0 = yscaler0.inverse_transform(ppc(posterior0['alpha'], posterior0['beta'],
                                        posterior0['sigma_n'], Xn0, nsamples=200))
for i in range(Ypred0.shape[0]):
    az.plot_dist(Ypred0[i,:], color='r', plot_kwargs={"linewidth": 0.2})
# plot one labelled curve so the legend has a "prediction" entry
az.plot_dist(Ypred0[i,:], color='r', plot_kwargs={"linewidth": 0.2}, label="prediction")
az.plot_dist(ylog0, label='true observations')
plt.legend()
plt.xlabel("log(y) - output variable")
plt.ylabel("density plot");
```
### Only Cluster 1
```
# Simulation
Ypred1 = yscaler1.inverse_transform(ppc(posterior1['alpha'], posterior1['beta'],
                                        posterior1['sigma_n'], Xn1, nsamples=200))
for i in range(Ypred1.shape[0]):
    az.plot_dist(Ypred1[i,:], color='r', plot_kwargs={"linewidth": 0.2})
# plot one labelled curve so the legend has a "prediction" entry
az.plot_dist(Ypred1[i,:], color='r', plot_kwargs={"linewidth": 0.2}, label="prediction")
az.plot_dist(ylog1, label='true observations')
plt.legend()
plt.xlabel("log(y) - output variable")
plt.ylabel("density plot");
```
### Only Cluster 2
```
# Simulation
Ypred2 = yscaler2.inverse_transform(ppc(posterior2['alpha'], posterior2['beta'],
                                        posterior2['sigma_n'], Xn2, nsamples=200))
for i in range(Ypred2.shape[0]):
    az.plot_dist(Ypred2[i,:], color='r', plot_kwargs={"linewidth": 0.2})
# plot one labelled curve so the legend has a "prediction" entry
az.plot_dist(Ypred2[i,:], color='r', plot_kwargs={"linewidth": 0.2}, label="prediction")
az.plot_dist(ylog2, label='true observations')
plt.legend()
plt.xlabel("log(y) - output variable")
plt.ylabel("density plot");
```
### Only Cluster 3
```
# Simulation
Ypred3 = yscaler3.inverse_transform(ppc(posterior3['alpha'], posterior3['beta'],
                                        posterior3['sigma_n'], Xn3, nsamples=200))
for i in range(Ypred3.shape[0]):
    az.plot_dist(Ypred3[i,:], color='r', plot_kwargs={"linewidth": 0.2})
# plot one labelled curve so the legend has a "prediction" entry
az.plot_dist(Ypred3[i,:], color='r', plot_kwargs={"linewidth": 0.2}, label="prediction")
az.plot_dist(ylog3, label='true observations')
plt.legend()
plt.xlabel("log(y) - output variable")
plt.ylabel("density plot");
```
## Overall
```
# posteriors
Ypred0 = ppc(posterior0['alpha'], posterior0['beta'], posterior0['sigma_n'], Xn0, nsamples=200)
Ypred1 = ppc(posterior1['alpha'], posterior1['beta'], posterior1['sigma_n'], Xn1, nsamples=200)
Ypred2 = ppc(posterior2['alpha'], posterior2['beta'], posterior2['sigma_n'], Xn2, nsamples=200)
Ypred3 = ppc(posterior3['alpha'], posterior3['beta'], posterior3['sigma_n'], Xn3, nsamples=200)
# simulation
Ypred = np.hstack([yscaler0.inverse_transform(Ypred0),
                   yscaler1.inverse_transform(Ypred1),
                   yscaler2.inverse_transform(Ypred2),
                   yscaler3.inverse_transform(Ypred3)])
# prediction
for i in range(Ypred.shape[0]):
    az.plot_dist(Ypred[i,:], color='r', plot_kwargs={"linewidth": 0.2})
# plot one labelled curve so the legend has a "prediction" entry
az.plot_dist(Ypred[i,:], color='r', plot_kwargs={"linewidth": 0.2}, label="prediction")
ylog = np.vstack([ylog0, ylog1, ylog2, ylog3])
az.plot_dist(ylog, label='true observations')
plt.legend()
plt.xlabel("log(y) - output variable")
```
## Test set performance
```
# cluster 0
y_pred_BLR0 = np.exp(yscaler0.inverse_transform(np.mean(posterior0['alpha'])
                     + np.dot(np.mean(posterior0['beta'], axis=0), Xtestn0.T)))
print("Size Cluster0", np.sum(clusters_test==0), ", MAE Cluster0=",
      np.mean(abs(y_pred_BLR0 - y_test[clusters_test==0])))
# cluster 1
y_pred_BLR1 = np.exp(yscaler1.inverse_transform(np.mean(posterior1['alpha'])
                     + np.dot(np.mean(posterior1['beta'], axis=0), Xtestn1.T)))
print("Size Cluster1", np.sum(clusters_test==1), ", MAE Cluster1=",
      np.mean(abs(y_pred_BLR1 - y_test[clusters_test==1])))
# cluster 2
y_pred_BLR2 = np.exp(yscaler2.inverse_transform(np.mean(posterior2['alpha'])
                     + np.dot(np.mean(posterior2['beta'], axis=0), Xtestn2.T)))
print("Size Cluster2", np.sum(clusters_test==2), ", MAE Cluster2=",
      np.mean(abs(y_pred_BLR2 - y_test[clusters_test==2])))
# cluster 3
y_pred_BLR3 = np.exp(yscaler3.inverse_transform(np.mean(posterior3['alpha'])
                     + np.dot(np.mean(posterior3['beta'], axis=0), Xtestn3.T)))
print("Size Cluster3", np.sum(clusters_test==3), ", MAE Cluster3=",
      np.mean(abs(y_pred_BLR3 - y_test[clusters_test==3])))
# joint absolute errors across all clusters
joint = np.hstack([abs(y_pred_BLR0 - y_test[clusters_test==0]),
                   abs(y_pred_BLR1 - y_test[clusters_test==1]),
                   abs(y_pred_BLR2 - y_test[clusters_test==2]),
                   abs(y_pred_BLR3 - y_test[clusters_test==3])])
# MAE
print("MAE=", np.mean(joint))
```
### PPC on the Test set
```
## Posterior predictive checks (PPCs)
num_samples2 = 200
Ypred0 = ppc(posterior0['alpha'], posterior0['beta'], posterior0['sigma_n'], Xtestn0, nsamples=num_samples2)
Ypred1 = ppc(posterior1['alpha'], posterior1['beta'], posterior1['sigma_n'], Xtestn1, nsamples=num_samples2)
Ypred2 = ppc(posterior2['alpha'], posterior2['beta'], posterior2['sigma_n'], Xtestn2, nsamples=num_samples2)
Ypred3 = ppc(posterior3['alpha'], posterior3['beta'], posterior3['sigma_n'], Xtestn3, nsamples=num_samples2)
# Stack arrays in sequence horizontally (column wise)
Ypred = np.hstack([yscaler0.inverse_transform(Ypred0),
                   yscaler1.inverse_transform(Ypred1),
                   yscaler2.inverse_transform(Ypred2),
                   yscaler3.inverse_transform(Ypred3)])
# plot prediction shape
for i in range(Ypred.shape[0]):
    az.plot_dist(Ypred[i,:], color='r', plot_kwargs={"linewidth": 0.2})
# plot one labelled curve so the legend has a "prediction" entry
az.plot_dist(Ypred[i,:], color='r', plot_kwargs={"linewidth": 0.2}, label="prediction")
# true observations
az.plot_dist(np.log(y_test),label='true observations');
plt.legend()
plt.xlabel("log(y) - output variable")
plt.ylabel("density plot");
```
```
!pip install qucumber
import numpy as np
import matplotlib.pyplot as plt
from qucumber.nn_states import PositiveWaveFunction
from qucumber.callbacks import MetricEvaluator
import qucumber.utils.training_statistics as ts
import qucumber.utils.data as data
import qucumber
```
# Rydberg Case
```
train_path = "Rydberg_data.txt"
train_data = data.load_data(train_path)[0]
nv = train_data.shape[-1]
nh = nv
nn_state = PositiveWaveFunction(num_visible=nv, num_hidden=nh, gpu=False)
epochs = 400
pbs = 100
nbs = pbs
lr = 0.01
k = 10
def psi_coefficient0(nn_state, space, A, **kwargs):
    norm = nn_state.compute_normalization(space).sqrt_()
    return A * nn_state.psi(space)[0][0] / norm
period = 10
space = nn_state.generate_hilbert_space()
callbacks = [
    MetricEvaluator(
        period,
        {"A_Ψrbm_0": psi_coefficient0},
        verbose=True,
        space=space,
        A=1.0,
    )
]
nn_state.fit(
    train_data,
    epochs=epochs,
    pos_batch_size=pbs,
    neg_batch_size=nbs,
    lr=lr,
    k=k,
    callbacks=callbacks,
    time=True,
)
coeffs = callbacks[0]["A_Ψrbm_0"]
epoch = np.arange(period, epochs + 1, period)
# Plotting
plt.plot(epoch, coeffs, "o", color="C2", markeredgecolor="black")
plt.tight_layout()
plt.show()
```
# Example Case
```
#psi_path = "R_1.2_psi.txt"
train_path = "R_1.2_samples.txt"
#train_data, true_psi = data.load_data(train_path, psi_path)
train_data = data.load_data(train_path)[0]
nv = train_data.shape[-1]
nh = nv
nn_state = PositiveWaveFunction(num_visible=nv, num_hidden=nh, gpu=False)
epochs = 400
pbs = 100
nbs = pbs
lr = 0.01
k = 10
def psi_coefficient0(nn_state, space, A, **kwargs):
    norm = nn_state.compute_normalization(space).sqrt_()
    return A * nn_state.psi(space)[0][0] / norm

def psi_coefficient1(nn_state, space, A, **kwargs):
    norm = nn_state.compute_normalization(space).sqrt_()
    return A * nn_state.psi(space)[0][1] / norm

def psi_coefficient2(nn_state, space, A, **kwargs):
    norm = nn_state.compute_normalization(space).sqrt_()
    return A * nn_state.psi(space)[0][2] / norm

def psi_coefficient3(nn_state, space, A, **kwargs):
    norm = nn_state.compute_normalization(space).sqrt_()
    return A * nn_state.psi(space)[0][3] / norm
period = 10
space = nn_state.generate_hilbert_space()
callbacks = [
    MetricEvaluator(
        period,
        {"A_Ψrbm_0": psi_coefficient0},
        verbose=True,
        space=space,
        A=1.0,
    )
]
nn_state.fit(
    train_data,
    epochs=epochs,
    pos_batch_size=pbs,
    neg_batch_size=nbs,
    lr=lr,
    k=k,
    callbacks=callbacks,
    time=True,
)
coeffs = callbacks[0]["A_Ψrbm_0"]
epoch = np.arange(period, epochs + 1, period)
# Plotting
plt.plot(epoch, coeffs, "o", color="C2", markeredgecolor="black")
plt.tight_layout()
plt.show()
callbacks = [
    MetricEvaluator(
        period,
        {"A_Ψrbm_1": psi_coefficient1},
        verbose=True,
        space=space,
        A=1.0,
    )
]
nn_state.fit(
    train_data,
    epochs=epochs,
    pos_batch_size=pbs,
    neg_batch_size=nbs,
    lr=lr,
    k=k,
    callbacks=callbacks,
    time=True,
)
coeffs = callbacks[0]["A_Ψrbm_1"]
epoch = np.arange(period, epochs + 1, period)
# Plotting
plt.plot(epoch, coeffs, "o", color="C2", markeredgecolor="black")
plt.tight_layout()
plt.show()
callbacks = [
    MetricEvaluator(
        period,
        {"A_Ψrbm_2": psi_coefficient2},
        verbose=True,
        space=space,
        A=1.0,
    )
]
nn_state.fit(
    train_data,
    epochs=epochs,
    pos_batch_size=pbs,
    neg_batch_size=nbs,
    lr=lr,
    k=k,
    callbacks=callbacks,
    time=True,
)
coeffs = callbacks[0]["A_Ψrbm_2"]
epoch = np.arange(period, epochs + 1, period)
# Plotting
plt.plot(epoch, coeffs, "o", color="C2", markeredgecolor="black")
plt.tight_layout()
plt.show()
callbacks = [
    MetricEvaluator(
        period,
        {"A_Ψrbm_3": psi_coefficient3},
        verbose=True,
        space=space,
        A=1.0,
    )
]
nn_state.fit(
    train_data,
    epochs=epochs,
    pos_batch_size=pbs,
    neg_batch_size=nbs,
    lr=lr,
    k=k,
    callbacks=callbacks,
    time=True,
)
coeffs = callbacks[0]["A_Ψrbm_3"]
epoch = np.arange(period, epochs + 1, period)
# Plotting
plt.plot(epoch, coeffs, "o", color="C2", markeredgecolor="black")
plt.tight_layout()
plt.show()
```
# From here on, the code is not used, because we did not compute the fidelity or the KL divergence
```
# Note that the key given to the *MetricEvaluator* must be
# what comes after callbacks[0].
fidelities = callbacks[0].Fidelity
# Alternatively, we can use the usual dictionary/list subscripting
# syntax. This is useful in cases where the name of the
# metric contains special characters or spaces.
KLs = callbacks[0]["KL"]
coeffs = callbacks[0]["A_Ψrbm_3"]
epoch = np.arange(period, epochs + 1, period)
# Plotting
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(14, 3))
ax = axs[0]
ax.plot(epoch, fidelities, "o", color="C0", markeredgecolor="black")
ax.set_ylabel(r"Fidelity")
ax.set_xlabel(r"Epoch")
ax = axs[1]
ax.plot(epoch, KLs, "o", color="C1", markeredgecolor="black")
ax.set_ylabel(r"KL Divergence")
ax.set_xlabel(r"Epoch")
ax = axs[2]
ax.plot(epoch, coeffs, "o", color="C2", markeredgecolor="black")
ax.set_ylabel(r"$A\psi_{RBM}[0]$")
ax.set_xlabel(r"Epoch")
plt.tight_layout()
plt.show()
```
# SAT Analysis
**We wish to answer the question: is the SAT a fair test?**
## Read in the data
```
import pandas as pd
import numpy as np
import re
data_files = [
"ap_2010.csv",
"class_size.csv",
"demographics.csv",
"graduation.csv",
"hs_directory.csv",
"sat_results.csv"
]
data = {}
for file in data_files:
    df = pd.read_csv("schools/{0}".format(file))
    data[file.replace(".csv", "")] = df
```
# Read in the surveys
```
all_survey = pd.read_csv("schools/survey_all.txt", delimiter="\t", encoding='windows-1252')
d75_survey = pd.read_csv("schools/survey_d75.txt", delimiter="\t", encoding='windows-1252')
survey = pd.concat([all_survey, d75_survey], axis=0)
survey["DBN"] = survey["dbn"]
survey_fields = [
"DBN",
"rr_s",
"rr_t",
"rr_p",
"N_s",
"N_t",
"N_p",
"saf_p_11",
"com_p_11",
"eng_p_11",
"aca_p_11",
"saf_t_11",
"com_t_11",
"eng_t_11",
"aca_t_11",
"saf_s_11",
"com_s_11",
"eng_s_11",
"aca_s_11",
"saf_tot_11",
"com_tot_11",
"eng_tot_11",
"aca_tot_11",
]
survey = survey[survey_fields]
data["survey"] = survey
```
# Add DBN columns
```
data["hs_directory"]["DBN"] = data["hs_directory"]["dbn"]
def pad_csd(num):
    str_rep = str(num)
    if len(str_rep) > 1:
        return str_rep
    else:
        return "0" + str_rep
data["class_size"]["padded_csd"] = data["class_size"]["CSD"].apply(pad_csd)
data["class_size"]["DBN"] = data["class_size"]["padded_csd"] + data["class_size"]["SCHOOL CODE"]
```
# Convert columns to numeric
```
cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score']
for c in cols:
    data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce")

data['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]]

def find_lat(loc):
    # raw string so the escaped parentheses are not treated as string escapes
    coords = re.findall(r"\(.+, .+\)", loc)
    lat = coords[0].split(",")[0].replace("(", "")
    return lat

def find_lon(loc):
    coords = re.findall(r"\(.+, .+\)", loc)
    lon = coords[0].split(",")[1].replace(")", "").strip()
    return lon
data["hs_directory"]["lat"] = data["hs_directory"]["Location 1"].apply(find_lat)
data["hs_directory"]["lon"] = data["hs_directory"]["Location 1"].apply(find_lon)
data["hs_directory"]["lat"] = pd.to_numeric(data["hs_directory"]["lat"], errors="coerce")
data["hs_directory"]["lon"] = pd.to_numeric(data["hs_directory"]["lon"], errors="coerce")
```
# Condense datasets
Condensing the datasets to remove any two rows having same **DBN** so that all the datasets can be easily joined on **"DBN"**
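The reason the duplicates matter: merging on a key that appears more than once in one table produces one output row per matching pair, silently inflating the row count. A tiny illustrative example with made-up data:

```python
import pandas as pd

left = pd.DataFrame({"DBN": ["01M015"], "sat_score": [1200]})
# two class-size rows for the same school, as before condensing
right = pd.DataFrame({"DBN": ["01M015", "01M015"], "class_size": [20, 30]})

merged = left.merge(right, on="DBN", how="inner")
print(len(merged))  # 2 rows: the single school got duplicated

# condensing first (mean per DBN) keeps one row per school
condensed = right.groupby("DBN", as_index=False).mean()
merged_ok = left.merge(condensed, on="DBN", how="inner")
print(len(merged_ok))  # 1
```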
```
class_size = data["class_size"]
class_size = class_size[class_size["GRADE "] == "09-12"]
class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"]
class_size = class_size.groupby("DBN").agg(np.mean)
class_size.reset_index(inplace=True)
data["class_size"] = class_size
data["demographics"] = data["demographics"][data["demographics"]["schoolyear"] == 20112012]
data["graduation"] = data["graduation"][data["graduation"]["Cohort"] == "2006"]
data["graduation"] = data["graduation"][data["graduation"]["Demographic"] == "Total Cohort"]
```
# Convert AP scores to numeric
```
cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5']
for col in cols:
    data["ap_2010"][col] = pd.to_numeric(data["ap_2010"][col], errors="coerce")
```
# Combine the datasets
Merging the dataset on **DBN** column
```
combined = data["sat_results"]
combined = combined.merge(data["ap_2010"], on="DBN", how="left")
combined = combined.merge(data["graduation"], on="DBN", how="left")
to_merge = ["class_size", "demographics", "survey", "hs_directory"]
for m in to_merge:
    combined = combined.merge(data[m], on="DBN", how="inner")
combined = combined.fillna(combined.mean())
combined = combined.fillna(0)
```
# Add a school district column for mapping
```
def get_first_two_chars(dbn):
    return dbn[0:2]
combined["school_dist"] = combined["DBN"].apply(get_first_two_chars)
```
# Find correlations
```
correlations = combined.corr()
correlations = correlations["sat_score"]
correlations
```
# Plotting survey correlations
```
# Remove DBN since it's a unique identifier, not a useful numerical value for correlation.
survey_fields.remove("DBN")
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
fig, ax = plt.subplots(figsize=(8, 5))
correlations[survey_fields].plot.bar()
plt.show()
```
#### Findings from above plot
There are high correlations between `N_s`, `N_t`, `N_p` and `sat_score`. Since these columns are correlated with `total_enrollment`, it makes sense that they would be high.

It is more interesting that `rr_s`, the student response rate (the percentage of students that completed the survey), correlates with `sat_score`. This might make sense because students who are more likely to fill out surveys may also be more likely to be doing well academically.

How students and teachers perceived safety (`saf_t_11` and `saf_s_11`) correlates with `sat_score`. This makes sense, as it is hard to teach or learn in an unsafe environment.

The last interesting correlation is that `aca_s_11`, which indicates how students perceive academic standards, correlates with `sat_score`, but this is not true for `aca_t_11` (how teachers perceive academic standards) or `aca_p_11` (how parents perceive academic standards).
## Investigating safety scores
```
combined.plot.scatter(x = "saf_s_11", y = "sat_score" )
plt.show()
```
There appears to be a correlation between SAT scores and safety, although it isn't that strong. It looks like there are a few schools with extremely high SAT scores and high safety scores, and a few schools with low safety scores and low SAT scores. No school with a safety score lower than 6.5 has an average SAT score higher than about 1500.
## Plotting safety scores for districts in NYC
```
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
districts = combined.groupby("school_dist").agg(np.mean)
districts.reset_index(inplace=True)
m = Basemap(
projection='merc',
llcrnrlat=40.496044,
urcrnrlat=40.915256,
llcrnrlon=-74.255735,
urcrnrlon=-73.700272,
resolution='i'
)
m.drawmapboundary(fill_color='#85A6D9')
m.drawcoastlines(color='#6D5F47', linewidth=.4)
m.drawrivers(color='#6D5F47', linewidth=.4)
m.fillcontinents(color='#FFC58C',lake_color='#85A6D9')
longitudes = districts["lon"].tolist()
latitudes = districts["lat"].tolist()
m.scatter(longitudes, latitudes, s=50, zorder=2, latlon=True, c=districts["saf_s_11"], cmap="summer")
plt.show()
```
## Investigating racial differences
```
race_cols = ["white_per", "asian_per", "black_per", "hispanic_per"]
correlations[race_cols].plot.bar()
```
The plot shows that a higher percentage of white or Asian students at a school correlates positively with SAT score, whereas a higher percentage of black or Hispanic students correlates negatively. This may be due to a lack of funding for schools in certain areas, which are more likely to have a higher percentage of black or Hispanic students.
### Hispanic people vs SAT score
```
combined.plot.scatter(x = "hispanic_per", y = "sat_score")
plt.show()
bool_hispanic_95 = combined["hispanic_per"] > 95
combined[bool_hispanic_95]["SCHOOL NAME"]
```
The schools listed above appear to primarily be geared towards recent immigrants to the US. These schools have a lot of students who are learning English, which would explain the lower SAT scores.
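That English-learner hypothesis could be checked against an English-language-learner column. The column name `ell_percent` and the frame below are assumptions for illustration (the notebook would filter `combined` the same way):

```python
import pandas as pd

# Toy stand-in for `combined`; `ell_percent` (share of English-language
# learners) is an assumed column name.
toy = pd.DataFrame({
    "SCHOOL NAME":  ["A", "B", "C"],
    "hispanic_per": [97.0, 40.0, 96.5],
    "ell_percent":  [85.0, 10.0, 72.0],
})

high_hispanic = toy[toy["hispanic_per"] > 95]
mean_ell = high_hispanic["ell_percent"].mean()
print(mean_ell)  # average ELL share among the >95% hispanic schools
```

A high average ELL share among these schools would support the explanation that lower SAT scores reflect students still learning English.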
```
bool_hispanic_10 = (combined["hispanic_per"] < 10) & (combined["sat_score"] > 1800)
combined[bool_hispanic_10]["SCHOOL NAME"]
```
Many of the schools above appear to be specialized science and technology schools that receive extra funding, and only admit students who pass an entrance exam. This doesn't explain the low hispanic_per, but it does explain why their students tend to do better on the SAT -- they are students from all over New York City who did well on a standardized test.
## Investigating gender differences
```
gender_cols = ["male_per", "female_per"]
correlations[gender_cols].plot.bar()
plt.show()
```
In the plot above, we can see that a high percentage of females at a school positively correlates with SAT score, whereas a high percentage of males at a school negatively correlates with SAT score. Neither correlation is extremely strong.
```
combined.plot.scatter(x = "female_per", y = "sat_score")
```
Based on the scatterplot, there doesn't seem to be any real correlation between sat_score and female_per. However, there is a cluster of schools with a high percentage of females (60 to 80), and high SAT scores.
```
bool_female = (combined["female_per"] > 60) & (combined["sat_score"] > 1700)
combined[bool_female]["SCHOOL NAME"]
```
These schools appear to be very selective liberal arts schools that have high academic standards.
## AP_test takers vs SAT
In the U.S., high school students take Advanced Placement (AP) exams to earn college credit. There are AP exams for many different subjects.
```
combined["ap_per"] = combined["AP Test Takers "]/ combined["total_enrollment"]
combined.plot.scatter(x = "ap_per", y = "sat_score")
```
It looks like there is a relationship between the percentage of students in a school who take the AP exam, and their average SAT scores. It's not an extremely strong correlation, though.
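The strength of that relationship can be quantified with the Pearson coefficient. A self-contained sketch on toy values (in the notebook this would be `combined["ap_per"].corr(combined["sat_score"])`):

```python
import pandas as pd

# Toy stand-in for the two columns plotted above.
toy = pd.DataFrame({
    "ap_per":    [0.1, 0.2, 0.3, 0.4, 0.5],
    "sat_score": [1000, 1100, 1150, 1300, 1350],
})
r = toy["ap_per"].corr(toy["sat_score"])  # Pearson correlation coefficient
print(round(r, 3))
```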
## Potential next steps
* Determining whether there's a correlation between class size and SAT scores
* Figuring out which neighborhoods have the best schools
* If we combine this information with a dataset containing property values, we could find the least expensive neighborhoods that have good schools.
* Investigating the differences between parent, teacher, and student responses to surveys.
* Assigning scores to schools based on sat_score and other attributes.
|
github_jupyter
|
## Recipe Builder Actions Overview
### Saving a File Cell
If you wish to save the contents of a cell, simply run it. The `%%writefile` command at the top of the cell will write the contents of the cell to the file named at the top of the cell. You should run the cells manually when applicable. However, **pressing any of the actions at the top will automatically run all file cells relevant to the action**.
### Training and Scoring
Press the associated buttons at the top in order to run training or scoring. The training output will be shown below the `pipeline.py` cell and scoring output will be shown below the `datasaver.py` cell. You must run training at least once before you can run scoring. You may delete the output cell(s). Running training the first time or after changing `requirements.txt` will be slower since the dependencies for the recipe need to be installed, but subsequent runs will be significantly faster.
### Creating the Recipe
When you are done editing the recipe and satisfied with the training/scoring output, you can create a recipe from the notebook by pressing `Create Recipe`. After pressing it, you will see a spinner that will spin until the recipe creation has finished. If the recipe creation is successful the spinner will be replaced by an external link that you can click to navigate to the created recipe.
## Caution!
* **Do not delete any of the file cells**
* **Do not edit the `%%writefile` line at the top of the file cells**
* **Do not refresh the JupyterLab page while the recipe is being created**
<br/>
<br/>
---
#### **Requirements file** (Optional)
Add additional libraries you wish to use in the recipe to the cell below. You can specify the version number if necessary. The file cell below is a **commented out example**.
```
# pandas=0.22.0
# numpy
```
Search here for additional libraries https://anaconda.org/. This is the list of main **libraries already in use**: <br/>
`python=3.5.2` `scikit-learn` `pandas` `numpy` `data_access_sdk_python` <br/>
**Warning: libraries or specific versions you add may be incompatible with the above libraries**.
#### **Configuration file**
Specify the dataset(s) you wish to use for training/scoring and add hyperparameters. To find the dataset ids go to the **Data tab** in Adobe Experience Platform.
```
{
"trainingDataSetId": "5c927f59012c9615168ba7ec",
"scoringDataSetId": "5c927f59012c9615168ba7ec",
"scoringResultsDataSetId":"5c927bd95a5f721515516861",
"ACP_DSW_TRAINING_XDM_SCHEMA":"https://ns.adobe.com/platformlab05/schemas/a48b2c12ba042a002ffeb75d11ada4c3",
"ACP_DSW_SCORING_RESULTS_XDM_SCHEMA":"https://ns.adobe.com/platformlab05/schemas/bd7afab62bf2588fe14b6cb15cb6ee0d",
"num_recommendations": "5",
"sampling_fraction": "0.5"
}
```
**The following configuration parameters are automatically set for you when you train/score:** <br/>
`ML_FRAMEWORK_IMS_USER_CLIENT_ID` `ML_FRAMEWORK_IMS_TOKEN` `ML_FRAMEWORK_IMS_ML_TOKEN` `ML_FRAMEWORK_IMS_TENANT_ID` `saveData`
---
#### **Evaluator file**
Fill in how you wish to evaluate your trained recipe and how your training data should be split. You can also use this file to load and prepare the training data.
```
from ml.runtime.python.Interfaces.AbstractEvaluator import AbstractEvaluator
from data_access_sdk_python.reader import DataSetReader
import numpy as np
import pandas as pd
class Evaluator(AbstractEvaluator):
def __init__(self):
print("Initiate")
self.user_id_column = '_platformlab05.userId'
self.recommendations_column = '_platformlab05.recommendations'
self.item_id_column = '_platformlab05.itemId'
def evaluate(self, data=[], model={}, configProperties={}):
print ("Evaluation evaluate triggered")
# remove columns having none
data = data[data[self.item_id_column].notnull()]
data_grouped_by_user = data.groupby(self.user_id_column).agg(
{self.item_id_column: lambda x: '#'.join(x)})\
.rename(columns={self.item_id_column:'interactions'}).reset_index()
data_recommendations = model.predict(data)
merged_df = pd.merge(data_grouped_by_user, data_recommendations, on=[self.user_id_column]).reset_index()
def compute_recall(row):
set_interactions = set(row['interactions'].split('#'))
set_recommendations = set(row[self.recommendations_column].split('#'))
inters = set_interactions.intersection(set_recommendations)
if len(inters) > 0:
return 1
return 0
def compute_precision(row):
set_interactions = set(row['interactions'].split('#'))
list_recommendations = row[self.recommendations_column].split('#')
score = 0
weight = 0.5
for rec in list_recommendations:
if rec in set_interactions:
score = score + weight
weight = weight / 2
return score
merged_df['recall'] = merged_df.apply(lambda row: compute_recall(row), axis=1)
merged_df['precision'] = merged_df.apply(lambda row: compute_precision(row), axis=1)
recall = merged_df['recall'].mean()
precision = merged_df['precision'].mean()
metric = [{"name": "Recall", "value": recall, "valueType": "double"},
{"name": "Precision", "value": precision, "valueType": "double"}]
print(metric)
return metric
def split(self, configProperties={}):
#########################################
# Load Data
#########################################
prodreader = DataSetReader(client_id=configProperties['ML_FRAMEWORK_IMS_USER_CLIENT_ID'],
user_token=configProperties['ML_FRAMEWORK_IMS_TOKEN'],
service_token=configProperties['ML_FRAMEWORK_IMS_ML_TOKEN'])
df = prodreader.load(data_set_id=configProperties['trainingDataSetId'],
ims_org=configProperties['ML_FRAMEWORK_IMS_TENANT_ID'])
train = df[:]
test = df[:]
return train, test
```
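The position weighting inside `compute_precision` above halves at each rank (0.5, 0.25, 0.125, ...), so earlier recommendations count more. Restated standalone, a toy run makes the scoring concrete:

```python
def compute_precision(interactions, recommendations):
    # Position-weighted hit score: the weight halves at each rank (0.5, 0.25, ...).
    hits = set(interactions)
    score, weight = 0.0, 0.5
    for rec in recommendations:
        if rec in hits:
            score += weight
        weight /= 2
    return score

# "a" is a hit at rank 1 (0.5), "c" at rank 3 (0.125): total 0.625.
print(compute_precision(["a", "c"], ["a", "b", "c"]))
```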
---
#### **Training Data Loader file**
Call your Evaluator split here and/or use this file to load and prepare the training data.
```
import numpy as np
import pandas as pd
from data_access_sdk_python.reader import DataSetReader
from recipe.evaluator import Evaluator
def load(configProperties):
print("Training Data Load Start")
evaluator = Evaluator()
(train_data, _) = evaluator.split(configProperties)
print("Training Data Load Finish")
return train_data
```
---
#### **Scoring Data Loader file**
Use this file to load and prepare your scoring data.
```
import numpy as np
import pandas as pd
from data_access_sdk_python.reader import DataSetReader
def load(configProperties):
print("Scoring Data Load Start")
#########################################
# Load Data
#########################################
prodreader = DataSetReader(client_id=configProperties['ML_FRAMEWORK_IMS_USER_CLIENT_ID'],
user_token=configProperties['ML_FRAMEWORK_IMS_TOKEN'],
service_token=configProperties['ML_FRAMEWORK_IMS_ML_TOKEN'])
df = prodreader.load(data_set_id=configProperties['scoringDataSetId'],
ims_org=configProperties['ML_FRAMEWORK_IMS_TENANT_ID'])
print("Scoring Data Load Finish")
return df
```
---
#### **Pipeline file**
Fill in the training and scoring functions for your recipe. Training output will be added below this file cell.
```
import pandas as pd
import numpy as np
from collections import Counter
class PopularityBasedRecommendationModel():
def __init__(self, num_to_recommend):
self.num_to_recommend = num_to_recommend
self.recommendations = ['dummy']
self.user_id_column = '_platformlab05.userId'
self.recommendations_column = '_platformlab05.recommendations'
self.item_id_column = '_platformlab05.itemId'
def fit(self, df):
df = df[df[self.item_id_column].notnull()]
self.recommendations = [item for item, freq in
Counter(list(df[self.item_id_column].values)).most_common(self.num_to_recommend)]
def predict(self, df):
# remove columns having none
df = df[df[self.item_id_column].notnull()]
df_grouped_by_user = df.groupby(self.user_id_column).agg(
{self.item_id_column: lambda x: ','.join(x)})\
.rename(columns={self.item_id_column:'interactions'}).reset_index()
df_grouped_by_user[self.recommendations_column] = '#'.join(self.recommendations)
df_grouped_by_user = df_grouped_by_user.drop(['interactions'],axis=1)
return df_grouped_by_user
def train(configProperties, data):
print("Train Start")
#########################################
# Extract fields from configProperties
#########################################
num_recommendations = int(configProperties['num_recommendations'])
#########################################
# Fit model
#########################################
model = PopularityBasedRecommendationModel(num_recommendations)
model.fit(data)
print("Train Complete")
return model
def score(configProperties, data, model):
print("Score Start")
result = model.predict(data)
print("Score Complete")
return result
```
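The core of `PopularityBasedRecommendationModel.fit` above is `Counter.most_common` over the item-id column: rank items by frequency and keep the top N. In isolation:

```python
from collections import Counter

# fit() reduces to ranking item ids by frequency and keeping the top N.
items = ["x", "y", "x", "z", "x", "y"]
top2 = [item for item, _ in Counter(items).most_common(2)]
print(top2)
```

Every user then receives this same list, joined with `#`, which is what makes the model "popularity-based" rather than personalized.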
---
#### **Data Saver file**
Add how you wish to save your scored data. **By default saveData=False, since saving data is not relevant when scoring from the notebook.** Scoring output will be added below this cell.
```
from data_access_sdk_python.writer import DataSetWriter
from functools import reduce
import json
def save(configProperties, prediction):
print(prediction)
prodwriter = DataSetWriter(client_id=configProperties['ML_FRAMEWORK_IMS_USER_CLIENT_ID'],
user_token=configProperties['ML_FRAMEWORK_IMS_TOKEN'],
service_token=configProperties['ML_FRAMEWORK_IMS_ML_TOKEN'])
batch_id = prodwriter.write(data_set_id=configProperties['scoringResultsDataSetId'],
dataframe=prediction,
ims_org=configProperties['ML_FRAMEWORK_IMS_TENANT_ID'])
print("Data written successfully to platform:",batch_id)
```
|
github_jupyter
|
# <center>Python Project: interactive pricing app based on Bokeh</center>
#### Ines BIDAL, Noam AFLALO
```
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
import dask.array as da
import matplotlib.pyplot as plt
import bokeh
import math
import scipy.stats as stats
from ipywidgets import interact, interactive, fixed, interact_manual
from scipy.sparse import diags
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import norm
import numpy.linalg as lng
import ipywidgets as widgets
from bokeh.io import push_notebook, show, output_notebook
from bokeh.layouts import row
from bokeh.plotting import figure, output_file, reset_output
from bokeh.core.properties import Instance, String
from bokeh.models import ColumnDataSource, LayoutDOM
from bokeh.util.compiler import TypeScript
# See how the memory is working
from dask.distributed import Client, progress
client = Client(processes=False, threads_per_worker=4,
n_workers=1, memory_limit='2e9')
client
```
# Part I : Graphical representation using Bokeh of the accuracy of the Euler Implicit Scheme to price European options.
## A/ Pricing using Black-Scholes (Closed form formula)
```
#Black and Scholes formula for European put
def P_BS(s, K, T):
d1 = math.log(s / (K * math.exp(-r*T))) / (sigma * math.sqrt(T)) + sigma * math.sqrt(T) / 2
d2 = math.log(s / (K * math.exp(- r * T))) / (sigma * math.sqrt(T)) - sigma * math.sqrt(T) / 2
Nd1 = stats.norm.cdf(- d1, loc=0, scale=1)
Nd2 = stats.norm.cdf(- d2, loc=0, scale=1)
P = math.exp(- r * T) * K * Nd2 - s * Nd1
return P
#Black and Scholes formula for European call
def C_BS(s, K, T):
d1 = math.log(s/(K* math.exp(-r*T))) / (sigma * math.sqrt(T)) + sigma * math.sqrt(T) / 2
d2 = math.log(s/(K* math.exp(-r*T))) / (sigma * math.sqrt(T)) - sigma * math.sqrt(T) / 2
Nd1 = stats.norm.cdf(d1, loc=0, scale=1)
Nd2 = stats.norm.cdf(d2, loc=0, scale=1)
C = s * Nd1 - math.exp(-r * T) * K * Nd2
return C
```
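One quick check of the closed forms above is put-call parity, C − P = S − K·e^(−rT), which they satisfy exactly. A stdlib-only restatement (r and sigma passed explicitly rather than read from globals, and `norm.cdf` replaced by `math.erf`):

```python
import math

def ncdf(x):
    # Standard normal CDF via erf (stdlib stand-in for scipy's norm.cdf).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(s, K, T, r, sigma):
    d1 = math.log(s / (K * math.exp(-r * T))) / (sigma * math.sqrt(T)) + sigma * math.sqrt(T) / 2
    d2 = d1 - sigma * math.sqrt(T)
    return math.exp(-r * T) * K * ncdf(-d2) - s * ncdf(-d1)

def bs_call(s, K, T, r, sigma):
    d1 = math.log(s / (K * math.exp(-r * T))) / (sigma * math.sqrt(T)) + sigma * math.sqrt(T) / 2
    d2 = d1 - sigma * math.sqrt(T)
    return s * ncdf(d1) - math.exp(-r * T) * K * ncdf(d2)

# Parity gap should be zero up to floating-point error.
s, K, T, r, sigma = 100.0, 100.0, 1.0, 0.01, 0.2
parity_gap = abs((bs_call(s, K, T, r, sigma) - bs_put(s, K, T, r, sigma))
                 - (s - K * math.exp(-r * T)))
print(parity_gap)
```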
## B/ Pricing using the Implicit Euler Scheme (approximation)
```
# initialisation
S=100
K = 100
r = 0.01
T = 1
sigma = 0.2
# NUMERICAL PARAMETERS
N=10
I=10
S_min=0; S_max=200;
h = (S_max - S_min) / (I + 1)
dt = T / N
Sval = 80
# payoff function
def u0(s, K=K) :
return np.maximum(0, K - s)
def u_left(t, S_min=S_min) :
return K * np.exp(-r * t) - S_min
def u_right(t) :
return 0
def u_call(s, K):
return np.maximum(0, s - K)
def IE(I,N, K, S0, sigma=sigma, T=T):
# NUMERICAL PARAMETERS
S_min = max(0, S0 - 100)
S_max = S0 + 100
Sval = 0.8 * S0
dt = T / N
h = (S_max - S_min) / I
#vectors
S = np.array([(S_min + j * h) for j in range(1,I+1)])
tn = np.array([n * dt for n in range(N+1)])
alpha = ((sigma ** 2)*(S ** 2)/(2 * (h ** 2)))
beta = (r / (2 * h)) * S
# q(t)
def q(t):
y = np.zeros((I,1))
y[0] = ( - alpha[1] + beta[1]) * u_left(t, S_min)
y[-1] = ( - alpha[-1] - beta[-1]) * u_right(t)
return y
# A
array = list(- alpha[:- 2] - beta[:- 2])
array.append(0)
array = np.array(array)
matToTridiag = np.array([- alpha[1::] + beta[1::],2 * alpha + r,array])
offset = [- 1,0,1]
A = diags(matToTridiag,offset).toarray()
U = u0(S, K).reshape(I, 1)
for i in range(N):
U = lng.solve(dt * A + np.eye(I), -q(tn[i + 1])*dt +U)
return(U, S)
```
# Graphical comparison between the two methods of pricing using bokeh
```
def plotter_eu_put(I, N, K, S0, sigma=sigma, T=T) :
V = IE(I, N, K, S0, sigma, T)
opts = dict(plot_width=550, plot_height=350, min_border=0,
title='European Put option')
p = figure(**opts)
r1 = p.line(V[1], V[0].ravel(), line_width=2, legend_label='Euler Scheme')
r2 = p.line(V[1], [u0(i, K) for i in V[1]], line_width=2,
color='red', legend_label='Payoff')
r3 = p.line(V[1], [P_BS(i, K, T) for i in V[1]], line_width=1,
color='black', legend_label='Black & Scholes',
line_dash='4 4')
p.xaxis.axis_label = 'S'
p.yaxis.axis_label = 'Price'
try :
reset_output()
output_notebook()
except :
try :
output_notebook()
except :
pass
t1 = show(p, notebook_handle=True)
def plotter_eu_call(I, N, K, S0, sigma=sigma, T=T):
V = IE(I, N, K, S0, sigma, T)
call = V[0].ravel() + V[1] - K * np.exp(-r*T)
opts = dict(plot_width=550, plot_height=350, min_border=0,
title='European Call option')
p = figure(**opts)
r1 = p.line(V[1], call, line_width=2, legend_label='Euler Scheme')
r2 = p.line(V[1], [u_call(i, K) for i in V[1]], line_width=2,
color='red', legend_label='Payoff')
r3 = p.line(V[1], [C_BS(i, K, T) for i in V[1]], line_width=1,
color='black', legend_label='Black & Scholes',
line_dash='4 4')
p.xaxis.axis_label = 'S'
p.yaxis.axis_label = 'Price'
try :
reset_output()
output_notebook()
except :
try :
output_notebook()
except :
pass
t1 = show(p, notebook_handle=True)
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
def func(Product):
if Product == 'Put' :
w = interact(lambda I, N, K, S0, sig, T : plotter_eu_put( I, N, K, S0, sig, T),
I=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Spot mesh I'),
N=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Time Mesh N' ),
K=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Strike K'),
S0=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Spot S'),
sig=widgets.BoundedFloatText(value=0.2, min=0, max=1.0, step=0.1, description='Volatility:', disabled=False),
T=widgets.BoundedFloatText(value=1, min=0, max=2.0, step=0.1, description='Maturity:', disabled=False))
elif Product == 'Call' :
w = interact(lambda I, N, K, S0, sig, T : plotter_eu_call( I, N, K, S0, sig, T),
I=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Spot mesh I'),
N=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Time Mesh N' ),
K=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Strike K'),
S0=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Spot S'),
sig=widgets.BoundedFloatText(value=0.2, min=0, max=1.0, step=0.1, description='Volatility:', disabled=False),
T=widgets.BoundedFloatText(value=1, min=0, max=2.0, step=0.1, description='Maturity:', disabled=False))
wf = interact(func, Product=['Put', 'Call'])
```
# Part II : Pricing of exotic options using different kinds of Monte Carlo simulation using Dask and Bokeh
```
T = 1
S0 = 100 # Initial price
r = 0.01 # Risk free interest rate
sigma = 0.2 # Volatility
rho = -0.5 # Correlation
kappa = 2.5 # Revert rate
theta = 0.05 # Long-term volatility
xi = 0.04 # Volatility of instantaneous volatility
v0 = 0.2 # Initial instantaneous volatility
Accumulationlevel = 70 # Accumulation Level
B = 110 # Barrier Level
nb_steps = 252 # Number of steps
nb_path = 1000 #  Number of paths
S = np.arange(50,150,252)
t = np.arange(0.1,1)
surface=True
```
## A/ Pricing of an accumulator under constant volatility model (Black-Scholes)
```
# We ran into a limitation with Dask: you cannot assign into parts of an array (e.g. arr[0] = 1 is not possible).
# One workaround is to concatenate array chunks, but that costs speed, so the NumPy version is preferred.
# Version Dask:
#from tqdm.autonotebook import tqdm
#def Black_Scholes_path(S0, T, r, sigma, nb_steps):
# '''Function that generate B&S path with constant volatility sigma'''
# S = da.from_array(np.array([S0]), chunks=(1,))
# dt = T / nb_steps
# dask_arrays = [S]
# for t in range(1, nb_steps):
# eps = np.random.standard_normal(1) # pseudorandom numbers
# # arr = da.from_array(np.array([dask_arrays[-1] * math.exp((r-0.5*sigma**2 ) * dt + sigma * eps * math.sqrt(dt))]),
# # chunks=1)
# arr = dask_arrays[-1] * math.exp((r-0.5*sigma**2 ) * dt + sigma * eps * math.sqrt(dt))
# dask_arrays.append(arr)
# return da.concatenate(dask_arrays, axis=0).compute()
#Version Numpy:
def Black_Scholes_path(S0, T, r, sigma, nb_steps):
'''Function that generate B&S path with constant volatility sigma'''
S = np.zeros(nb_steps)
S[0] = S0
dt = T / nb_steps
for t in range(1, nb_steps):
eps = np.random.standard_normal(1) # pseudorandom numbers
S[t] = S[t - 1] * np.exp((r - 0.5 * sigma**2 ) * dt + sigma * eps * np.sqrt(dt))
return S
```
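A cheap sanity check on the simulator above: under geometric Brownian motion the terminal mean must equal S0·e^(rT). A vectorized sketch with a fixed seed (simulating only the terminal value, not the full path):

```python
import numpy as np

# Under GBM, E[S_T] = S0 * exp(r*T); check it by Monte Carlo.
rng = np.random.default_rng(0)
S0, r, sigma, T, n = 100.0, 0.01, 0.2, 1.0, 200_000
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_mean = float(ST.mean())
print(mc_mean, S0 * np.exp(r * T))
```

The standard error of the mean here is roughly S0·σ/√n ≈ 0.045, so the Monte Carlo estimate should land well within 0.5 of the theoretical value.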
## B/ Pricing of an accumulator under non constant volatility model (Heston model)
```
# We ran into a limitation with Dask: you cannot assign into parts of an array (e.g. arr[0] = 1 is not possible).
# One workaround is to concatenate array chunks, but that costs speed, so the NumPy version is preferred.
# Version Dask:
#def HeMC (S0, r, v0, rho, kappa, theta, xi, T, nb_steps):
# '''Generate a Monte Carlo simulation for the Heston model'''
#
# # Generate random Brownian Motion
# R = np.array([0, 0])
# COV = np.matrix([[1, rho], [rho, 1]])
# W = np.random.multivariate_normal(R, COV, nb_steps)
# W = da.from_array(W, chunks=(2, 10))
# W_S = W[:, 0]
# W_v = W[:, 1]
# dt = T / nb_steps
# # Generate paths
# arr = np.zeros((nb_steps, 1))
# arr[:, 0] = v0
# v = [da.from_array(arr, chunks=1)]
# S = [da.from_array(np.array([S0]), chunks=1)]
#
# for t in range(1,nb_steps):
# v_tmp = v[-1] + kappa * (theta - v[-1]) * dt + xi * np.sqrt(v[-1]) * np.sqrt(dt) * W_v[t]
# v.append(v_tmp)
#
# S_tmp = S[-1] * np.exp((r - 0.5 * v[-1][0]) * dt + np.sqrt(v[-1][0] * dt) * W_S[t])
# S.append(S_tmp)
#
# S = da.concatenate(S, axis=0).compute()
# vol = da.concatenate(v, axis=1).compute()
#
# return S, vol
# Version Numpy:
def HeMC (S0, r, v0, rho, kappa, theta, xi, T, nb_steps):
'''Generate a Monte Carlo simulation for the Heston model'''
# Generate random Brownian Motion
R = np.array([0, 0])
COV = np.matrix([[1, rho], [rho, 1]])
W = np.random.multivariate_normal(R, COV, nb_steps)
W_S = W[:, 0]
W_v = W[:, 1]
dt = T / nb_steps
# Generate paths
v = np.zeros(nb_steps)
v[0] = v0
S = np.zeros(nb_steps)
S[0] = S0
expiry_dates = np.zeros(nb_steps)
strikes = np.zeros(nb_steps)
strikes[0] = S0
vol = np.zeros((nb_steps,nb_steps))
vol[:, 0] = v0
for t in range(1,nb_steps):
expiry_dates[t] = t * dt
v[t] = v[t-1] + kappa * (theta - v[t-1]) * dt + xi * np.sqrt(v[t-1]) * np.sqrt(dt) * W_v[t]
vol[:, t] = v[t]
S[t] = S[t-1] * np.exp((r - 0.5 * v[t-1]) * dt + np.sqrt(v[t-1] * dt) * W_S[t])
strikes[t] = S[t]
return S, vol
def Accumulator(S0, kappa, theta, xi, Accumulationlevel,
sigma, B, Type, Model, T=T,
nb_path=nb_path, nb_steps=nb_steps, r=r):  # prices an accumulator: Type selects continuous vs discrete barrier monitoring, Model selects constant (BS) vs stochastic (Heston) volatility
currentPayoff = np.zeros(nb_path)
if Type == "Continuous":
shift=np.exp(0.5826 * sigma * np.sqrt(T/nb_steps))
B = B / shift
for i in range(nb_path):
if Model == "BS":
S = Black_Scholes_path(S0, T, r, sigma, nb_steps)
else:
S= HeMC (S0, r, v0, rho, kappa, theta, xi, T, nb_steps)[0]
if (i == 0 and surface):
plotter_heston(S0, kappa, theta, xi, v0, rho, nb_steps,
r, T) # with stochastic vol, also display the volatility surface that was used
for t in range(1, nb_steps):
if S[t] > B:
break
elif (S[t] < B) & (S[t] > Accumulationlevel):
currentPayoff[i] = currentPayoff[i] + (S[t]- Accumulationlevel)
else:
currentPayoff[i] = currentPayoff[i] + 2 * (S[t]- Accumulationlevel)
print('The price for this configuration is : '+str(np.exp(-r * T) * np.mean(currentPayoff)))
return np.exp(-r * T) * np.mean(currentPayoff)
# Code Java Script
TS_CODE = """
import {LayoutDOM, LayoutDOMView} from "models/layouts/layout_dom"
import {ColumnDataSource} from "models/sources/column_data_source"
import {LayoutItem} from "core/layout"
import * as p from "core/properties"
declare namespace vis {
class Graph3d {
constructor(el: HTMLElement, data: object, OPTIONS: object)
setData(data: vis.DataSet): void
}
class DataSet {
add(data: unknown): void
}
}
const OPTIONS = {
width: '600px',
height: '600px',
style: 'surface',
showPerspective: true,
showGrid: true,
keepAspectRatio: true,
verticalRatio: 1.0,
legendLabel: 'stuff',
cameraPosition: {
horizontal: -0.35,
vertical: 0.22,
distance: 1.8,
},
}
export class Surface3dView extends LayoutDOMView {
model: Surface3d
private _graph: vis.Graph3d
initialize(): void {
super.initialize()
const url = "https://cdnjs.cloudflare.com/ajax/libs/vis/4.16.1/vis.min.js"
const script = document.createElement("script")
script.onload = () => this._init()
script.async = false
script.src = url
document.head.appendChild(script)
}
private _init(): void {
this._graph = new vis.Graph3d(this.el, this.get_data(), OPTIONS)
this.connect(this.model.data_source.change, () => {
this._graph.setData(this.get_data())
})
}
get_data(): vis.DataSet {
const data = new vis.DataSet()
const source = this.model.data_source
for (let i = 0; i < source.get_length()!; i++) {
data.add({
x: source.data[this.model.x][i],
y: source.data[this.model.y][i],
z: source.data[this.model.z][i],
})
}
return data
}
get child_models(): LayoutDOM[] {
return []
}
_update_layout(): void {
this.layout = new LayoutItem()
this.layout.set_sizing(this.box_sizing())
}
}
export namespace Surface3d {
export type Attrs = p.AttrsOf<Props>
export type Props = LayoutDOM.Props & {
x: p.Property<string>
y: p.Property<string>
z: p.Property<string>
data_source: p.Property<ColumnDataSource>
}
}
export interface Surface3d extends Surface3d.Attrs {}
export class Surface3d extends LayoutDOM {
properties: Surface3d.Props
__view_type__: Surface3dView
constructor(attrs?: Partial<Surface3d.Attrs>) {
super(attrs)
}
static __name__ = "Surface3d"
static init_Surface3d() {
this.prototype.default_view = Surface3dView
this.define<Surface3d.Props>({
x: [ p.String ],
y: [ p.String ],
z: [ p.String ],
data_source: [ p.Instance ],
})
}
}
"""
class Surface3d(LayoutDOM):
__implementation__ = TypeScript(TS_CODE)
data_source = Instance(ColumnDataSource)
x = String
y = String
z = String
def plotter_heston(S0, kappa, theta, xi,
v0=0.2, rho=-0.2, nb_steps=252, r=0.1, T=1):
vol = HeMC (S0, r, v0, rho, kappa, theta, xi, T, nb_steps)[1]
ny, nx=vol.shape
x = np.linspace(T, 0, nx)
y = np.linspace(0, 2*S0, ny)
xv, yv = np.meshgrid(x, y)
xv = xv.ravel() * 2e6
yv = yv.ravel() * 1e4
zv = vol.ravel() * 1e4
data = dict(x=xv, y=yv, z=zv)
source = ColumnDataSource(data=data)
surface = Surface3d(x="x", y="y", z="z", data_source=source, width=6000, height=6000)
reset_output()
output_file('foo.html')
show(surface)
w = interact(lambda S, k, theta, xi, Acc, sig, B, Type, Model, T, n_p, n_s : Accumulator(S, k, theta, xi, Acc, sig, B, Type, Model, T, n_p, n_s),
S=widgets.IntSlider(min=0, max=200, step=1, value=100, description='Spot Price:'),
k=widgets.FloatText(value=0.3, description='Revert Rate:', disabled=False),
theta=widgets.FloatText(value=0.2, description='Long Term Vol:', disabled=False),
xi=widgets.FloatText(value=0.2, description='Vol of vol:', disabled=False),
Acc=widgets.IntSlider(min=0, max=200, step=10, value=70, description='Accu Level:'),
sig=widgets.FloatSlider(value=0.2, min=0, max=1.0, step=0.1, description='BS Vol:'),
B=widgets.IntSlider(min=0, max=200, step=10, value=110, description='Barrier:'),
Type=['Continuous', 'Discret'],
Model=['BS', 'Heston'],
T=widgets.BoundedFloatText(value=1, min=0, max=2.0, step=0.1, description='T:', disabled=False),
n_p=widgets.IntText(value=1000, description='Number path:', disabled=False),
n_s=widgets.IntText(value=252, description='Number Steps:', disabled=False))
```
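One property worth checking for the Heston defaults above is the Feller condition, 2κθ > ξ², under which the CIR variance process stays strictly positive (so the `np.sqrt(v[t-1])` calls in `HeMC` never see a negative value):

```python
# Feller condition for the notebook's default Heston parameters.
kappa, theta, xi = 2.5, 0.05, 0.04
feller = 2 * kappa * theta > xi**2
print(feller)  # 0.25 > 0.0016
```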
# Conclusion: API summarizing the whole project
```
def func(Product):
if Product == 'Put' :
w = interact(lambda I, N, K, S0, sig, T : plotter_eu_put( I, N, K, S0, sig, T),
I=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Spot mesh I'),
N=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Time Mesh N' ),
K=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Strike K'),
S0=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Spot S'),
sig=widgets.BoundedFloatText(value=0.2, min=0, max=1.0, step=0.1, description='Volatility:', disabled=False),
T=widgets.BoundedFloatText(value=1, min=0, max=2.0, step=0.1, description='Maturity:', disabled=False))
elif Product == 'Call' :
w = interact(lambda I, N, K, S0, sig, T : plotter_eu_call( I, N, K, S0, sig, T),
I=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Spot mesh I'),
N=widgets.IntSlider(min=10, max=30, step=1, value=20, description='Time Mesh N' ),
K=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Strike K'),
S0=widgets.IntSlider(min=10, max=200, step=10, value=100, description='Spot S'),
sig=widgets.BoundedFloatText(value=0.2, min=0, max=1.0, step=0.1, description='Volatility:', disabled=False),
T=widgets.BoundedFloatText(value=1, min=0, max=2.0, step=0.1, description='Maturity:', disabled=False))
elif Product == 'Accumulator' :
w = interact(lambda S, k, theta, xi, Acc, sig, B, Type, Model, T, n_p, n_s : Accumulator(S, k, theta, xi, Acc, sig, B, Type, Model, T, n_p, n_s),
S=widgets.IntSlider(min=0, max=200, step=1, value=100, description='Spot Price:'),
k=widgets.FloatText(value=0.3, description='Revert Rate:', disabled=False),
theta=widgets.FloatText(value=0.2, description='Long Term Vol:', disabled=False),
xi=widgets.FloatText(value=0.2, description='Vol of vol:', disabled=False),
Acc=widgets.IntSlider(min=0, max=200, step=10, value=70, description='Accu Level:'),
sig=widgets.FloatSlider(value=0.2, min=0, max=1.0, step=0.1, description='BS Vol:'),
B=widgets.IntSlider(min=0, max=200, step=10, value=110, description='Barrier:'),
Type=['Continuous', 'Discret'],
Model=['BS', 'Heston'],
T=widgets.BoundedFloatText(value=1, min=0, max=2.0, step=0.1, description='T:', disabled=False),
n_p=widgets.IntText(value=1000, description='Number path:', disabled=False),
n_s=widgets.IntText(value=252, description='Number Steps:', disabled=False))
wf = interact(func, Product=['Put', 'Call', 'Accumulator'])
```
# Testing of the code using the module ipytest
```
import ipytest
ipytest.autoconfig()
%%run_pytest[clean]
def test_vanilla_options():
assert np.abs(P_BS(100, 100, 1) - C_BS(100, 100, 1)) < 1.0 # ATM call and put prices are close: they differ by S - K*exp(-rT).
assert np.abs((C_BS(100, 150, 1)- P_BS(100, 150, 1) - (100 - 150))/(100 - 150))<0.03 # Verification of the put call parity.
def test_comparison_BS_Euler():
#Verifications that the Euler implicit scheme give approximately the same results as the BS closed form formula.
assert np.round(np.abs(IE(20,20, 100, 100, sigma=sigma, T=T)[0][9]- P_BS(100, 100, 1)),0) == 0
Smax= 2 * 100
assert np.round(np.abs(IE(20,20, 100, 100, sigma=sigma, T=T)[0][19]-P_BS(Smax, 100, 1)),0) == 0
assert np.round(np.abs((IE(20,20, 100, 100, sigma=sigma, T=T)[0][0]-P_BS(0.0001, 100, 1))/P_BS(0.0001, 100, 1)),0)<0.01
```
The following tests check that:
1) All else being equal, an accumulator monitored discretely should be more expensive than one monitored continuously. The probability that the barrier is reached is higher under continuous monitoring, and once the barrier is breached the consumer stops accumulating, whereas his objective is to buy as much as possible at the favourable accumulation price (except in the uncommon case where he caps the quantity he is willing to buy). The knock-out event is therefore against his interest.
2) All else being equal, the price of the structure should decrease as the accumulation level approaches the barrier level: if the price at which the consumer buys the underlying increases, the value of the accumulator must decrease.
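Point 1 can be sanity-checked with a quick Monte Carlo sketch. It is illustrative only and independent of the `Accumulator` function above (the parameter names here are ad hoc): on the same simulated Black-Scholes paths, a barrier monitored on every time step (a proxy for continuous monitoring) is hit at least as often as one monitored on a coarse subset of dates, since the fine-grained path maximum dominates the coarse one.

```python
import numpy as np

# Illustrative check: knock-out frequency under fine vs coarse barrier
# monitoring on identical simulated paths (ad hoc parameters).
rng = np.random.default_rng(0)
S0, sigma, T, B = 100.0, 0.2, 1.0, 110.0
n_path, n_step = 20000, 252
dt = T / n_step
z = rng.standard_normal((n_path, n_step))
S = S0 * np.exp(np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1))

hit_fine = (S.max(axis=1) >= B).mean()             # monitored every step
hit_coarse = (S[:, ::21].max(axis=1) >= B).mean()  # monitored ~monthly
assert hit_fine >= hit_coarse
```

Hitting the barrier more often under finer monitoring is exactly why the continuously monitored accumulator is worth less to the consumer.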
```
%%run_pytest[clean]
def test_accumulator_BS_barrier():
    # discrete monitoring must be worth more than continuous monitoring
    assert Accumulator(S0, kappa, theta, xi, Accumulationlevel,
                       sigma, B, 'Discretely', 'BS', T=T,
                       nb_path=nb_path, nb_steps=nb_steps, r=r) > \
           Accumulator(S0, kappa, theta, xi, Accumulationlevel,
                       sigma, B, 'Continuous', 'BS', T=T,
                       nb_path=nb_path, nb_steps=nb_steps, r=r)

def test_accumulator_BS_moneyness():
    # an accumulation level closer to the barrier (80 vs 70) must be worth less,
    # all other parameters (including the monitoring type) held equal
    assert Accumulator(S0, kappa, theta, xi, 70,
                       sigma, B, 'Discretely', 'BS', T=T,
                       nb_path=nb_path, nb_steps=nb_steps, r=r) > \
           Accumulator(S0, kappa, theta, xi, 80,
                       sigma, B, 'Discretely', 'BS', T=T,
                       nb_path=nb_path, nb_steps=nb_steps, r=r)
```
<a href="https://colab.research.google.com/github/NikhilAsogekar3/Deep-generative-models/blob/Nikhil-Asogekar/Flow_based_models_MNIST.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install tensorboardX
!git clone https://github.com/ikostrikov/pytorch-flows.git
%cd /content/pytorch-flows
#!python3 main.py --dataset MNIST --epochs 39 --flow realnvp
import matplotlib.pyplot as plt
validation_loss_arr = [1412.0406375, 1370.5079875, 1348.550775, 1334.159575, 1325.254575, 1319.1790625, 1315.10325, 1312.5301, 1310.65305, 1310.687425, 1311.35375, 1311.877675, 1312.796075, 1314.81735, 1317.54745, 1319.8324375, 1321.603, 1324.7897, 1327.7507125, 1331.42705, 1334.2232, 1339.4804875, 1342.9756375, 1346.1300125, 1350.1139375, 1356.0851125, 1360.2053125, 1366.4998875, 1370.7212, 1374.3232875, 1379.8238, 1385.65835, 1389.1418375, 1394.9656875, 1403.6788125, 1401.44405, 1409.9638625, 1413.2237375]
t = range(0, len(validation_loss_arr))
plt.plot(t, validation_loss_arr)
train_loss_arr = [1511.5976767578125, 1360.9772260742188, 1322.0785751953124, 1296.0211049804689, 1276.5144577636718, 1261.229338623047, 1248.6503671875, 1237.9503461914062, 1228.6204182128906, 1220.3964350585939, 1213.1653029785157, 1206.3630102539062, 1200.22780078125, 1194.51880859375, 1189.2079301757813, 1184.2624074707032, 1179.5217976074218, 1175.1067041015624, 1170.7996330566407, 1166.799998046875, 1162.8641767578124, 1159.2805954589844, 1155.6494814453124, 1152.1222729492188, 1148.6916076660157, 1145.3542294921874, 1142.176744140625, 1139.1558515625, 1136.1532312011718, 1133.1632014160157, 1130.3760786132812, 1127.6925244140625, 1124.7973051757813, 1122.0719990234375, 1119.3449682617188, 1117.0235095214844, 1114.2212861328126, 1111.9701120605469]
t = range(0, len(train_loss_arr))
plt.plot(t, train_loss_arr)
plt.plot(t, validation_loss_arr, t, train_loss_arr)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend(["Validation loss", "Train loss"])
!python3 main.py --dataset MNIST --epochs 39 --flow glow
validation_loss_arr = [1484.944525, 1448.5011, 1438.97775, 1435.0574875, 1430.6920875, 1427.9998875, 1425.3092375, 1421.1903375, 1417.8452125, 1415.0766, 1412.69425, 1410.895975, 1409.8048875, 1408.6240375, 1408.8573125, 1407.4946875, 1406.6598125, 1407.115675, 1407.099875, 1407.207725, 1408.56065, 1407.9039625, 1408.49985, 1409.919175, 1410.6583375, 1411.1293125, 1411.93785, 1413.17175, 1414.00035, 1414.5922, 1415.8910125, 1417.012075, 1417.9070625, 1419.245525, 1420.567775, 1421.0723, 1422.3375, 1423.8462625, 1425.2494375]
t = range(0, len(validation_loss_arr))
train_loss_arr = [1592.3340473632813, 1432.2148771972657, 1411.8592380371094, 1404.2368190917969, 1399.0122912597656, 1393.9151166992187, 1388.4402751464843, 1382.2163818359375, 1375.8588979492188, 1369.3731450195312, 1363.0482341308593, 1356.9891979980468, 1351.5233972167969, 1346.2685063476563, 1341.3521879882812, 1336.5260437011718, 1332.3902065429688, 1328.0386740722656, 1324.2451677246095, 1320.651205078125, 1317.3042258300782, 1313.7869914550781, 1310.7244875488282, 1307.6534665527345, 1304.8240341796875, 1302.179833984375, 1299.3900551757813, 1296.8192004394532, 1294.365427734375, 1291.9521706542969, 1289.6495329589843, 1287.4671130371094, 1285.2979084472656, 1283.0090056152344, 1281.0169001464844, 1279.0981711425782, 1277.2146674804687, 1275.240196533203, 1273.138677734375]
t = range(0, len(train_loss_arr))
plt.plot(t, validation_loss_arr, t, train_loss_arr)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend(["Validation loss", "Train loss"])
```
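In both runs the training loss keeps falling while the validation loss bottoms out and then rises — the usual overfitting signature. The early-stopping epoch is simply the argmin of the validation curve (shown on a toy curve here; apply the same call to the `validation_loss_arr` values above):

```python
import numpy as np

# Locate the early-stopping epoch: the index where validation loss is minimal.
# Toy curve for illustration only.
valid_loss = [10.0, 8.0, 7.5, 7.8, 8.4, 9.0]
best_epoch = int(np.argmin(valid_loss))
assert best_epoch == 2
```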
# 1. Libraries
```
import numpy as np
import pandas as pd
import tensorflow as tf
import keras.preprocessing.image
import sklearn.preprocessing
import sklearn.model_selection
import sklearn.metrics
import sklearn.linear_model
import sklearn.naive_bayes
import sklearn.tree
import sklearn.ensemble
import os;
import datetime
import cv2
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
```
# 2. Analyzing data
```
train_df = pd.read_csv('train.csv')
print("Data Shape:", train_df.shape)
print('')
print(train_df.isnull().any().describe())
print('distinct labels ', train_df['label'].unique())
# labels are approximately balanced (5 occurs least often, 1 most often)
print(train_df['label'].value_counts())
## normalize data and split into training and validation sets
# function to normalize data
def normalize_data(data):
data = data / data.max()
return data
# convert class labels from scalars to one-hot vectors e.g. 1 => [0 1 0 0 0 0 0 0 0 0]
def dense_to_one_hot(labels_dense, num_classes):
num_labels = labels_dense.shape[0]
index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
# convert one-hot encodings into labels
def one_hot_to_dense(labels_one_hot):
return np.argmax(labels_one_hot,1)
# compute the accuracy of dense label predictions
def accuracy_from_dense_labels(y_target, y_pred):
y_target = y_target.reshape(-1,)
y_pred = y_pred.reshape(-1,)
return np.mean(y_target == y_pred)
# compute the accuracy of one-hot encoded predictions
def accuracy_from_one_hot_labels(y_target, y_pred):
y_target = one_hot_to_dense(y_target).reshape(-1,)
y_pred = one_hot_to_dense(y_pred).reshape(-1,)
return np.mean(y_target == y_pred)
# extract and normalize images
x_train_valid = train_df.iloc[:,1:].values.reshape(-1,28,28,1) # (42000,28,28,1) array
x_train_valid = x_train_valid.astype(np.float32) # convert from int64 to float32
x_train_valid = normalize_data(x_train_valid)
image_width = image_height = 28
image_size = 784
# extract image labels
y_train_valid_labels = train_df.iloc[:,0].values # (42000,1) array
labels_count = np.unique(y_train_valid_labels).shape[0]; # number of different labels = 10
#plot some images and labels
plt.figure(figsize=(15,9))
for i in range(50):
plt.subplot(5,10,1+i)
plt.title(y_train_valid_labels[i])
plt.imshow(x_train_valid[i].reshape(28,28), cmap=cm.binary)
# labels in one hot representation
y_train_valid = dense_to_one_hot(y_train_valid_labels, labels_count).astype(np.uint8)
# dictionaries for saving results
y_valid_pred = {}
y_train_pred = {}
y_test_pred = {}
train_loss, valid_loss = {}, {}
train_acc, valid_acc = {}, {}
print('x_train_valid.shape = ', x_train_valid.shape)
print('y_train_valid_labels.shape = ', y_train_valid_labels.shape)
print('image_size = ', image_size )
print('image_width = ', image_width)
print('image_height = ', image_height)
print('labels_count = ', labels_count)
```
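As a self-contained sanity check of the one-hot helpers above (the encoder is restated so the snippet runs on its own), the flat-index trick is equivalent to row-indexing an identity matrix, and `np.argmax` inverts it:

```python
import numpy as np

# Restated from the cell above so this check is self-contained.
def dense_to_one_hot(labels_dense, num_classes):
    num_labels = labels_dense.shape[0]
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes))
    labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
    return labels_one_hot

labels = np.array([3, 0, 9])
one_hot = dense_to_one_hot(labels, 10)
assert (one_hot == np.eye(10)[labels]).all()    # same as identity-row lookup
assert (np.argmax(one_hot, 1) == labels).all()  # one_hot_to_dense round trip
```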
# 3. Data Augmentation
```
# generate new images via rotations, translations and zoom using keras
def generate_images(imgs):
# rotations, translations, zoom
image_generator = keras.preprocessing.image.ImageDataGenerator(
rotation_range = 10, width_shift_range = 0.1 , height_shift_range = 0.1,
zoom_range = 0.1)
# get transformed images
imgs = image_generator.flow(imgs.copy(), np.zeros(len(imgs)),
batch_size=len(imgs), shuffle = False).next()
return imgs[0]
# check image generation: original digit in column 0, augmented copies in columns 1-9
fig, axs = plt.subplots(5, 10, figsize=(15, 9))
for i in range(5):
    n = np.random.randint(0, x_train_valid.shape[0]-2)
    axs[i,0].imshow(x_train_valid[n:n+1].reshape(28,28), cmap=cm.binary)
    for j in range(1, 10):
        axs[i,j].imshow(generate_images(x_train_valid[n:n+1]).reshape(28,28), cmap=cm.binary)
```
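The Keras generator applies random rotations, shifts and zooms. As a rough numpy-only sketch of one such transform — a horizontal shift with zero-fill, a crude stand-in for `width_shift_range`, not the Keras implementation — consider:

```python
import numpy as np

# Hypothetical minimal width-shift augmentation (assumption: np.roll with
# zero-fill of the wrapped columns approximates what width_shift_range does).
def shift_horizontal(img, pixels):
    shifted = np.roll(img, pixels, axis=1)
    if pixels > 0:
        shifted[:, :pixels] = 0.0   # zero-fill wrapped-around columns
    elif pixels < 0:
        shifted[:, pixels:] = 0.0
    return shifted

img = np.zeros((28, 28)); img[10:18, 10:18] = 1.0  # a toy "digit" block
out = shift_horizontal(img, 3)
assert out.sum() == img.sum()                      # mass preserved for small shifts
assert out[10, 13] == 1.0 and out[10, 10] == 0.0   # block moved 3 columns right
```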
# 4. Basic Model Generation using Sklearn
```
## First try out some basic sklearn models
logreg = sklearn.linear_model.LogisticRegression(verbose=0, solver='lbfgs',
multi_class='multinomial')
decision_tree = sklearn.tree.DecisionTreeClassifier()
extra_trees = sklearn.ensemble.ExtraTreesClassifier(verbose=0)
gradient_boost = sklearn.ensemble.GradientBoostingClassifier(verbose=0)
random_forest = sklearn.ensemble.RandomForestClassifier(verbose=0)
gaussianNB = sklearn.naive_bayes.GaussianNB()
# store models in dictionary
base_models = {'logreg': logreg, 'extra_trees': extra_trees,
'gradient_boost': gradient_boost, 'random_forest': random_forest,
'decision_tree': decision_tree, 'gaussianNB': gaussianNB}
# choose models for out-of-folds predictions
take_models = ['logreg','random_forest','extra_trees']
for mn in take_models:
train_acc[mn] = []
valid_acc[mn] = []
# cross validations
cv_num = 10 # number of cross-validation folds (10 => 10% validation set)
kfold = sklearn.model_selection.KFold(cv_num, shuffle=True, random_state=123)
for i,(train_index, valid_index) in enumerate(kfold.split(x_train_valid)):
# start timer
start = datetime.datetime.now();
# train and validation data of original images
x_train = x_train_valid[train_index].reshape(-1,784)
y_train = y_train_valid[train_index]
x_valid = x_train_valid[valid_index].reshape(-1,784)
y_valid = y_train_valid[valid_index]
for mn in take_models:
# create cloned model from base models
model = sklearn.base.clone(base_models[mn])
model.fit(x_train, one_hot_to_dense(y_train))
# predictions
y_train_pred[mn] = model.predict_proba(x_train)
y_valid_pred[mn] = model.predict_proba(x_valid)
train_acc[mn].append(accuracy_from_one_hot_labels(y_train_pred[mn], y_train))
valid_acc[mn].append(accuracy_from_one_hot_labels(y_valid_pred[mn], y_valid))
print(i,': '+mn+' train/valid accuracy = %.3f/%.3f'%(train_acc[mn][-1],
valid_acc[mn][-1]))
    # set to True to stop after a single fold
    if False:
        break
print(mn+': averaged train/valid accuracy = %.3f/%.3f'%(np.mean(train_acc[mn]),
np.mean(valid_acc[mn])))
#Comparisons of Accuracies of base models
# boxplot algorithm comparison
fig = plt.figure(figsize=(20,8))
ax = fig.add_subplot(1,2,1)
plt.title('Train accuracy')
plt.boxplot([train_acc[mn] for mn in train_acc.keys()])
ax.set_xticklabels([mn for mn in train_acc.keys()])
ax.set_ylabel('Accuracy');
ax.set_ylim([0.90,1.0])
ax = fig.add_subplot(1,2,2)
plt.title('Valid accuracy')
plt.boxplot([valid_acc[mn] for mn in train_acc.keys()])
ax.set_xticklabels([mn for mn in train_acc.keys()])
ax.set_ylabel('Accuracy');
ax.set_ylim([0.90,1.0])
for mn in train_acc.keys():
print(mn + ' averaged train/valid accuracy = %.3f/%.3f'%(np.mean(train_acc[mn]),
np.mean(valid_acc[mn])))
```
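The cross-validation loop above consumes `(train_index, valid_index)` pairs from sklearn's `KFold`. The sketch below (a hypothetical `kfold_indices` helper, not sklearn itself) shows the shape of the splits it iterates over:

```python
import numpy as np

# Hypothetical helper illustrating K-fold splits: shuffle once, cut into k
# disjoint folds, and use each fold in turn as the validation set.
def kfold_indices(n, k, seed=123):
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), k)
    for i in range(k):
        valid = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, valid

splits = list(kfold_indices(100, 10))
assert len(splits) == 10
for train, valid in splits:
    assert len(train) == 90 and len(valid) == 10
    assert len(np.intersect1d(train, valid)) == 0  # folds are disjoint
```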
# 5. Neural Network with TensorFlow
```
## build the neural network class
class nn_class:
# class that implements the neural network
# constructor
def __init__(self, nn_name = 'nn_1'):
# tunable hyperparameters for nn architecture
self.s_f_conv1 = 3; # filter size of first convolution layer (default = 3)
self.n_f_conv1 = 36; # number of features of first convolution layer (default = 36)
self.s_f_conv2 = 3; # filter size of second convolution layer (default = 3)
self.n_f_conv2 = 36; # number of features of second convolution layer (default = 36)
self.s_f_conv3 = 3; # filter size of third convolution layer (default = 3)
self.n_f_conv3 = 36; # number of features of third convolution layer (default = 36)
self.n_n_fc1 = 576; # number of neurons of first fully connected layer (default = 576)
# tunable hyperparameters for training
self.mb_size = 50 # mini batch size
self.keep_prob = 0.33 # keeping probability with dropout regularization
self.learn_rate_array = [10*1e-4, 7.5*1e-4, 5*1e-4, 2.5*1e-4, 1*1e-4, 1*1e-4,
1*1e-4,0.75*1e-4, 0.5*1e-4, 0.25*1e-4, 0.1*1e-4,
0.1*1e-4, 0.075*1e-4,0.050*1e-4, 0.025*1e-4, 0.01*1e-4,
0.0075*1e-4, 0.0050*1e-4,0.0025*1e-4,0.001*1e-4]
self.learn_rate_step_size = 3 # in terms of epochs
# parameters
self.learn_rate = self.learn_rate_array[0]
self.learn_rate_pos = 0 # current position pointing to current learning rate
self.index_in_epoch = 0
self.current_epoch = 0
self.log_step = 0.2 # log results in terms of epochs
self.n_log_step = 0 # counting current number of mini batches trained on
self.use_tb_summary = False # True = use tensorboard visualization
self.use_tf_saver = False # True = use saver to save the model
self.nn_name = nn_name # name of the neural network
# permutation array
self.perm_array = np.array([])
# function to get the next mini batch
def next_mini_batch(self):
start = self.index_in_epoch
self.index_in_epoch += self.mb_size
self.current_epoch += self.mb_size/len(self.x_train)
# adapt length of permutation array
if not len(self.perm_array) == len(self.x_train):
self.perm_array = np.arange(len(self.x_train))
# shuffle once at the start of epoch
if start == 0:
np.random.shuffle(self.perm_array)
# at the end of the epoch
if self.index_in_epoch > self.x_train.shape[0]:
np.random.shuffle(self.perm_array) # shuffle data
start = 0 # start next epoch
self.index_in_epoch = self.mb_size # set index to mini batch size
if self.train_on_augmented_data:
# use augmented data for the next epoch
self.x_train_aug = normalize_data(self.generate_images(self.x_train))
self.y_train_aug = self.y_train
end = self.index_in_epoch
if self.train_on_augmented_data:
# use augmented data
x_tr = self.x_train_aug[self.perm_array[start:end]]
y_tr = self.y_train_aug[self.perm_array[start:end]]
else:
# use original data
x_tr = self.x_train[self.perm_array[start:end]]
y_tr = self.y_train[self.perm_array[start:end]]
return x_tr, y_tr
# generate new images via rotations, translations, zoom using keras
def generate_images(self, imgs):
print('generate new set of images')
# rotations, translations, zoom
image_generator = keras.preprocessing.image.ImageDataGenerator(
rotation_range = 10, width_shift_range = 0.1 , height_shift_range = 0.1,
zoom_range = 0.1)
# get transformed images
imgs = image_generator.flow(imgs.copy(), np.zeros(len(imgs)),
batch_size=len(imgs), shuffle = False).next()
return imgs[0]
# weight initialization
def weight_variable(self, shape, name = None):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name = name)
# bias initialization
def bias_variable(self, shape, name = None):
initial = tf.constant(0.1, shape=shape) # positive bias
return tf.Variable(initial, name = name)
# 2D convolution
def conv2d(self, x, W, name = None):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME', name = name)
# max pooling
def max_pool_2x2(self, x, name = None):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
padding='SAME', name = name)
# attach summaries to a tensor for TensorBoard visualization
def summary_variable(self, var, var_name):
with tf.name_scope(var_name):
mean = tf.reduce_mean(var)
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.summary.scalar('mean', mean)
tf.summary.scalar('stddev', stddev)
tf.summary.scalar('max', tf.reduce_max(var))
tf.summary.scalar('min', tf.reduce_min(var))
tf.summary.histogram('histogram', var)
# function to create the graph
def create_graph(self):
# reset default graph
tf.reset_default_graph()
# variables for input and output
self.x_data_tf = tf.placeholder(dtype=tf.float32, shape=[None,28,28,1],
name='x_data_tf')
self.y_data_tf = tf.placeholder(dtype=tf.float32, shape=[None,10], name='y_data_tf')
# 1.layer: convolution + max pooling
self.W_conv1_tf = self.weight_variable([self.s_f_conv1, self.s_f_conv1, 1,
self.n_f_conv1],
name = 'W_conv1_tf') # (5,5,1,32)
self.b_conv1_tf = self.bias_variable([self.n_f_conv1], name = 'b_conv1_tf') # (32)
self.h_conv1_tf = tf.nn.relu(self.conv2d(self.x_data_tf,
self.W_conv1_tf) + self.b_conv1_tf,
name = 'h_conv1_tf') # (.,28,28,32)
self.h_pool1_tf = self.max_pool_2x2(self.h_conv1_tf,
name = 'h_pool1_tf') # (.,14,14,32)
# 2.layer: convolution + max pooling
self.W_conv2_tf = self.weight_variable([self.s_f_conv2, self.s_f_conv2,
self.n_f_conv1, self.n_f_conv2],
name = 'W_conv2_tf')
self.b_conv2_tf = self.bias_variable([self.n_f_conv2], name = 'b_conv2_tf')
self.h_conv2_tf = tf.nn.relu(self.conv2d(self.h_pool1_tf,
self.W_conv2_tf) + self.b_conv2_tf,
name ='h_conv2_tf') #(.,14,14,32)
self.h_pool2_tf = self.max_pool_2x2(self.h_conv2_tf, name = 'h_pool2_tf') #(.,7,7,32)
# 3.layer: convolution + max pooling
self.W_conv3_tf = self.weight_variable([self.s_f_conv3, self.s_f_conv3,
self.n_f_conv2, self.n_f_conv3],
name = 'W_conv3_tf')
self.b_conv3_tf = self.bias_variable([self.n_f_conv3], name = 'b_conv3_tf')
self.h_conv3_tf = tf.nn.relu(self.conv2d(self.h_pool2_tf,
self.W_conv3_tf) + self.b_conv3_tf,
name = 'h_conv3_tf') #(.,7,7,32)
self.h_pool3_tf = self.max_pool_2x2(self.h_conv3_tf,
name = 'h_pool3_tf') # (.,4,4,32)
# 4.layer: fully connected
self.W_fc1_tf = self.weight_variable([4*4*self.n_f_conv3,self.n_n_fc1],
name = 'W_fc1_tf') # (4*4*32, 1024)
self.b_fc1_tf = self.bias_variable([self.n_n_fc1], name = 'b_fc1_tf') # (1024)
self.h_pool3_flat_tf = tf.reshape(self.h_pool3_tf, [-1,4*4*self.n_f_conv3],
name = 'h_pool3_flat_tf') # (.,1024)
self.h_fc1_tf = tf.nn.relu(tf.matmul(self.h_pool3_flat_tf,
self.W_fc1_tf) + self.b_fc1_tf,
name = 'h_fc1_tf') # (.,1024)
# add dropout
self.keep_prob_tf = tf.placeholder(dtype=tf.float32, name = 'keep_prob_tf')
self.h_fc1_drop_tf = tf.nn.dropout(self.h_fc1_tf, self.keep_prob_tf,
name = 'h_fc1_drop_tf')
# 5.layer: fully connected
self.W_fc2_tf = self.weight_variable([self.n_n_fc1, 10], name = 'W_fc2_tf')
self.b_fc2_tf = self.bias_variable([10], name = 'b_fc2_tf')
self.z_pred_tf = tf.add(tf.matmul(self.h_fc1_drop_tf, self.W_fc2_tf),
self.b_fc2_tf, name = 'z_pred_tf')# => (.,10)
# cost function
self.cross_entropy_tf = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=self.y_data_tf, logits=self.z_pred_tf), name = 'cross_entropy_tf')
# optimisation function
self.learn_rate_tf = tf.placeholder(dtype=tf.float32, name="learn_rate_tf")
self.train_step_tf = tf.train.AdamOptimizer(self.learn_rate_tf).minimize(
self.cross_entropy_tf, name = 'train_step_tf')
# predicted probabilities in one-hot encoding
self.y_pred_proba_tf = tf.nn.softmax(self.z_pred_tf, name='y_pred_proba_tf')
# tensor of correct predictions
self.y_pred_correct_tf = tf.equal(tf.argmax(self.y_pred_proba_tf, 1),
tf.argmax(self.y_data_tf, 1),
name = 'y_pred_correct_tf')
# accuracy
self.accuracy_tf = tf.reduce_mean(tf.cast(self.y_pred_correct_tf, dtype=tf.float32),
name = 'accuracy_tf')
# tensors to save intermediate accuracies and losses during training
self.train_loss_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='train_loss_tf', validate_shape = False)
self.valid_loss_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='valid_loss_tf', validate_shape = False)
self.train_acc_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='train_acc_tf', validate_shape = False)
self.valid_acc_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='valid_acc_tf', validate_shape = False)
# number of weights and biases
num_weights = (self.s_f_conv1**2*self.n_f_conv1
+ self.s_f_conv2**2*self.n_f_conv1*self.n_f_conv2
+ self.s_f_conv3**2*self.n_f_conv2*self.n_f_conv3
+ 4*4*self.n_f_conv3*self.n_n_fc1 + self.n_n_fc1*10)
num_biases = self.n_f_conv1 + self.n_f_conv2 + self.n_f_conv3 + self.n_n_fc1
print('num_weights =', num_weights)
print('num_biases =', num_biases)
return None
def attach_summary(self, sess):
# create summary tensors for tensorboard
self.use_tb_summary = True
self.summary_variable(self.W_conv1_tf, 'W_conv1_tf')
self.summary_variable(self.b_conv1_tf, 'b_conv1_tf')
self.summary_variable(self.W_conv2_tf, 'W_conv2_tf')
self.summary_variable(self.b_conv2_tf, 'b_conv2_tf')
self.summary_variable(self.W_conv3_tf, 'W_conv3_tf')
self.summary_variable(self.b_conv3_tf, 'b_conv3_tf')
self.summary_variable(self.W_fc1_tf, 'W_fc1_tf')
self.summary_variable(self.b_fc1_tf, 'b_fc1_tf')
self.summary_variable(self.W_fc2_tf, 'W_fc2_tf')
self.summary_variable(self.b_fc2_tf, 'b_fc2_tf')
tf.summary.scalar('cross_entropy_tf', self.cross_entropy_tf)
tf.summary.scalar('accuracy_tf', self.accuracy_tf)
# merge all summaries for tensorboard
self.merged = tf.summary.merge_all()
# initialize summary writer
timestamp = datetime.datetime.now().strftime('%d-%m-%Y_%H-%M-%S')
filepath = os.path.join(os.getcwd(), 'logs', (self.nn_name+'_'+timestamp))
self.train_writer = tf.summary.FileWriter(os.path.join(filepath,'train'), sess.graph)
self.valid_writer = tf.summary.FileWriter(os.path.join(filepath,'valid'), sess.graph)
def attach_saver(self):
# initialize tensorflow saver
self.use_tf_saver = True
self.saver_tf = tf.train.Saver()
# function to train the graph
def train_graph(self, sess, x_train, y_train, x_valid, y_valid, n_epoch = 1,
train_on_augmented_data = False):
# train on original or augmented data
self.train_on_augmented_data = train_on_augmented_data
# training and validation data
self.x_train = x_train
self.y_train = y_train
self.x_valid = x_valid
self.y_valid = y_valid
# use augmented data
if self.train_on_augmented_data:
print('generate new set of images')
self.x_train_aug = normalize_data(self.generate_images(self.x_train))
self.y_train_aug = self.y_train
# parameters
mb_per_epoch = self.x_train.shape[0]/self.mb_size
train_loss, train_acc, valid_loss, valid_acc = [],[],[],[]
# start timer
start = datetime.datetime.now();
print(datetime.datetime.now().strftime('%d-%m-%Y %H:%M:%S'),': start training')
print('learnrate = ',self.learn_rate,', n_epoch = ', n_epoch,
', mb_size = ', self.mb_size)
# looping over mini batches
for i in range(int(n_epoch*mb_per_epoch)+1):
# adapt learn_rate
self.learn_rate_pos = int(self.current_epoch // self.learn_rate_step_size)
if not self.learn_rate == self.learn_rate_array[self.learn_rate_pos]:
self.learn_rate = self.learn_rate_array[self.learn_rate_pos]
print(datetime.datetime.now()-start,': set learn rate to %.6f'%self.learn_rate)
# get new batch
x_batch, y_batch = self.next_mini_batch()
# run the graph
sess.run(self.train_step_tf, feed_dict={self.x_data_tf: x_batch,
self.y_data_tf: y_batch,
self.keep_prob_tf: self.keep_prob,
self.learn_rate_tf: self.learn_rate})
# store losses and accuracies
if i%int(self.log_step*mb_per_epoch) == 0 or i == int(n_epoch*mb_per_epoch):
self.n_log_step += 1 # for logging the results
feed_dict_train = {
self.x_data_tf: self.x_train[self.perm_array[:len(self.x_valid)]],
self.y_data_tf: self.y_train[self.perm_array[:len(self.y_valid)]],
self.keep_prob_tf: 1.0}
feed_dict_valid = {self.x_data_tf: self.x_valid,
self.y_data_tf: self.y_valid,
self.keep_prob_tf: 1.0}
# summary for tensorboard
if self.use_tb_summary:
train_summary = sess.run(self.merged, feed_dict = feed_dict_train)
valid_summary = sess.run(self.merged, feed_dict = feed_dict_valid)
self.train_writer.add_summary(train_summary, self.n_log_step)
self.valid_writer.add_summary(valid_summary, self.n_log_step)
train_loss.append(sess.run(self.cross_entropy_tf,
feed_dict = feed_dict_train))
train_acc.append(self.accuracy_tf.eval(session = sess,
feed_dict = feed_dict_train))
valid_loss.append(sess.run(self.cross_entropy_tf,
feed_dict = feed_dict_valid))
valid_acc.append(self.accuracy_tf.eval(session = sess,
feed_dict = feed_dict_valid))
print('%.2f epoch: train/val loss = %.4f/%.4f, train/val acc = %.4f/%.4f'%(
self.current_epoch, train_loss[-1], valid_loss[-1],
train_acc[-1], valid_acc[-1]))
# concatenate losses and accuracies and assign to tensor variables
tl_c = np.concatenate([self.train_loss_tf.eval(session=sess), train_loss], axis = 0)
vl_c = np.concatenate([self.valid_loss_tf.eval(session=sess), valid_loss], axis = 0)
ta_c = np.concatenate([self.train_acc_tf.eval(session=sess), train_acc], axis = 0)
va_c = np.concatenate([self.valid_acc_tf.eval(session=sess), valid_acc], axis = 0)
sess.run(tf.assign(self.train_loss_tf, tl_c, validate_shape = False))
sess.run(tf.assign(self.valid_loss_tf, vl_c , validate_shape = False))
sess.run(tf.assign(self.train_acc_tf, ta_c , validate_shape = False))
sess.run(tf.assign(self.valid_acc_tf, va_c , validate_shape = False))
print('running time for training: ', datetime.datetime.now() - start)
return None
# save tensors/summaries
def save_model(self, sess):
# tf saver
if self.use_tf_saver:
#filepath = os.path.join(os.getcwd(), 'logs' , self.nn_name)
filepath = os.path.join(os.getcwd(), self.nn_name)
self.saver_tf.save(sess, filepath)
# tb summary
if self.use_tb_summary:
self.train_writer.close()
self.valid_writer.close()
return None
# forward prediction of current graph
def forward(self, sess, x_data):
y_pred_proba = self.y_pred_proba_tf.eval(session = sess,
feed_dict = {self.x_data_tf: x_data,
self.keep_prob_tf: 1.0})
return y_pred_proba
# function to load tensors from a saved graph
def load_tensors(self, graph):
# input tensors
self.x_data_tf = graph.get_tensor_by_name("x_data_tf:0")
self.y_data_tf = graph.get_tensor_by_name("y_data_tf:0")
# weights and bias tensors
self.W_conv1_tf = graph.get_tensor_by_name("W_conv1_tf:0")
self.W_conv2_tf = graph.get_tensor_by_name("W_conv2_tf:0")
self.W_conv3_tf = graph.get_tensor_by_name("W_conv3_tf:0")
self.W_fc1_tf = graph.get_tensor_by_name("W_fc1_tf:0")
self.W_fc2_tf = graph.get_tensor_by_name("W_fc2_tf:0")
self.b_conv1_tf = graph.get_tensor_by_name("b_conv1_tf:0")
self.b_conv2_tf = graph.get_tensor_by_name("b_conv2_tf:0")
self.b_conv3_tf = graph.get_tensor_by_name("b_conv3_tf:0")
self.b_fc1_tf = graph.get_tensor_by_name("b_fc1_tf:0")
self.b_fc2_tf = graph.get_tensor_by_name("b_fc2_tf:0")
# activation tensors
self.h_conv1_tf = graph.get_tensor_by_name('h_conv1_tf:0')
self.h_pool1_tf = graph.get_tensor_by_name('h_pool1_tf:0')
self.h_conv2_tf = graph.get_tensor_by_name('h_conv2_tf:0')
self.h_pool2_tf = graph.get_tensor_by_name('h_pool2_tf:0')
self.h_conv3_tf = graph.get_tensor_by_name('h_conv3_tf:0')
self.h_pool3_tf = graph.get_tensor_by_name('h_pool3_tf:0')
self.h_fc1_tf = graph.get_tensor_by_name('h_fc1_tf:0')
self.z_pred_tf = graph.get_tensor_by_name('z_pred_tf:0')
# training and prediction tensors
self.learn_rate_tf = graph.get_tensor_by_name("learn_rate_tf:0")
self.keep_prob_tf = graph.get_tensor_by_name("keep_prob_tf:0")
self.cross_entropy_tf = graph.get_tensor_by_name('cross_entropy_tf:0')
self.train_step_tf = graph.get_operation_by_name('train_step_tf')
self.z_pred_tf = graph.get_tensor_by_name('z_pred_tf:0')
self.y_pred_proba_tf = graph.get_tensor_by_name("y_pred_proba_tf:0")
self.y_pred_correct_tf = graph.get_tensor_by_name('y_pred_correct_tf:0')
self.accuracy_tf = graph.get_tensor_by_name('accuracy_tf:0')
# tensors of losses and accuracies stored during training
self.train_loss_tf = graph.get_tensor_by_name("train_loss_tf:0")
self.train_acc_tf = graph.get_tensor_by_name("train_acc_tf:0")
self.valid_loss_tf = graph.get_tensor_by_name("valid_loss_tf:0")
self.valid_acc_tf = graph.get_tensor_by_name("valid_acc_tf:0")
return None
# get losses of training and validation sets
def get_loss(self, sess):
train_loss = self.train_loss_tf.eval(session = sess)
valid_loss = self.valid_loss_tf.eval(session = sess)
return train_loss, valid_loss
# get accuracies of training and validation sets
def get_accuracy(self, sess):
train_acc = self.train_acc_tf.eval(session = sess)
valid_acc = self.valid_acc_tf.eval(session = sess)
return train_acc, valid_acc
# get weights
def get_weights(self, sess):
W_conv1 = self.W_conv1_tf.eval(session = sess)
W_conv2 = self.W_conv2_tf.eval(session = sess)
W_conv3 = self.W_conv3_tf.eval(session = sess)
W_fc1_tf = self.W_fc1_tf.eval(session = sess)
W_fc2_tf = self.W_fc2_tf.eval(session = sess)
return W_conv1, W_conv2, W_conv3, W_fc1_tf, W_fc2_tf
# get biases
def get_biases(self, sess):
b_conv1 = self.b_conv1_tf.eval(session = sess)
b_conv2 = self.b_conv2_tf.eval(session = sess)
b_conv3 = self.b_conv3_tf.eval(session = sess)
b_fc1_tf = self.b_fc1_tf.eval(session = sess)
b_fc2_tf = self.b_fc2_tf.eval(session = sess)
return b_conv1, b_conv2, b_conv3, b_fc1_tf, b_fc2_tf
# load session from file, restore graph, and load tensors
def load_session_from_file(self, filename):
tf.reset_default_graph()
filepath = os.path.join(os.getcwd(), filename + '.meta')
#filepath = os.path.join(os.getcwd(),'logs', filename + '.meta')
saver = tf.train.import_meta_graph(filepath)
print(filepath)
sess = tf.Session()
saver.restore(sess, os.path.join(os.getcwd(), filename))
graph = tf.get_default_graph()
self.load_tensors(graph)
return sess
# receive activations given the input
def get_activations(self, sess, x_data):
feed_dict = {self.x_data_tf: x_data, self.keep_prob_tf: 1.0}
h_conv1 = self.h_conv1_tf.eval(session = sess, feed_dict = feed_dict)
h_pool1 = self.h_pool1_tf.eval(session = sess, feed_dict = feed_dict)
h_conv2 = self.h_conv2_tf.eval(session = sess, feed_dict = feed_dict)
h_pool2 = self.h_pool2_tf.eval(session = sess, feed_dict = feed_dict)
h_conv3 = self.h_conv3_tf.eval(session = sess, feed_dict = feed_dict)
h_pool3 = self.h_pool3_tf.eval(session = sess, feed_dict = feed_dict)
h_fc1 = self.h_fc1_tf.eval(session = sess, feed_dict = feed_dict)
h_fc2 = self.z_pred_tf.eval(session = sess, feed_dict = feed_dict)
return h_conv1,h_pool1,h_conv2,h_pool2,h_conv3,h_pool3,h_fc1,h_fc2
```
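The piecewise-constant learning-rate schedule in `train_graph` selects `learn_rate_array[current_epoch // learn_rate_step_size]`. A minimal sketch of that lookup (with an added `min()` clamp — an assumption on my part; the class itself would index past the end of its 20-entry array once `current_epoch` reaches 60):

```python
# Sketch of the stepwise learning-rate schedule used in train_graph:
# the rate drops to the next array entry every learn_rate_step_size epochs.
learn_rate_array = [1e-3, 7.5e-4, 5e-4, 2.5e-4, 1e-4]  # shortened for illustration
learn_rate_step_size = 3  # epochs per step

def learn_rate_at(epoch):
    # clamp added so late epochs stay on the final rate
    pos = min(int(epoch // learn_rate_step_size), len(learn_rate_array) - 1)
    return learn_rate_array[pos]

assert learn_rate_at(0.0) == 1e-3
assert learn_rate_at(2.9) == 1e-3   # still within the first 3 epochs
assert learn_rate_at(3.0) == 7.5e-4
assert learn_rate_at(100) == 1e-4   # clamped at the last entry
```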
# 6. Train and Validate the Neural Network
```
## train the neural network graph:
# train the network, visualize the losses, accuracies, weights and
# activations, and tune the hyperparameters
#nn_name = ['nn0','nn1','nn2','nn3','nn4','nn5','nn6','nn7','nn8','nn9']
nn_name = ['tmp']
# cross validations
cv_num = 10 # number of cross-validation folds (10 => 10% validation set)
kfold = sklearn.model_selection.KFold(cv_num, shuffle=True, random_state=123)
for i,(train_index, valid_index) in enumerate(kfold.split(x_train_valid)):
# start timer
start = datetime.datetime.now();
# train and validation data of original images
x_train = x_train_valid[train_index]
y_train = y_train_valid[train_index]
x_valid = x_train_valid[valid_index]
y_valid = y_train_valid[valid_index]
# create neural network graph
nn_graph = nn_class(nn_name = nn_name[i]) # instance of nn_class
nn_graph.create_graph() # create graph
nn_graph.attach_saver() # attach saver tensors
# start tensorflow session
with tf.Session() as sess:
# attach summaries
nn_graph.attach_summary(sess)
# variable initialization of the default graph
sess.run(tf.global_variables_initializer())
# training on original data
nn_graph.train_graph(sess, x_train, y_train, x_valid, y_valid, n_epoch = 1.0)
# training on augmented data
nn_graph.train_graph(sess, x_train, y_train, x_valid, y_valid, n_epoch = 14.0,
train_on_augmented_data = True)
# save tensors and summaries of model
nn_graph.save_model(sess)
    # stop after a single fold; set to False to train on all cross-validation folds
    if True:
        break
print('total running time for training: ', datetime.datetime.now() - start)
## visualization with tensorboard
if False:
!tensorboard --logdir=./logs
## show confusion matrix
mn = nn_name[0]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(mn)
y_valid_pred[mn] = nn_graph.forward(sess, x_valid)
sess.close()
cnf_matrix = sklearn.metrics.confusion_matrix(
one_hot_to_dense(y_valid_pred[mn]), one_hot_to_dense(y_valid)).astype(np.float32)
labels_array = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
fig, ax = plt.subplots(1,figsize=(10,10))
ax = sns.heatmap(cnf_matrix, ax=ax, cmap=plt.cm.Greens, annot=True)
ax.set_xticklabels(labels_array)
ax.set_yticklabels(labels_array)
plt.title('Confusion matrix of validation set')
plt.ylabel('True digit')
plt.xlabel('Predicted digit')
plt.show();
## loss and accuracy curves
mn = nn_name[0]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(mn)
train_loss[mn], valid_loss[mn] = nn_graph.get_loss(sess)
train_acc[mn], valid_acc[mn] = nn_graph.get_accuracy(sess)
sess.close()
print('final train/valid loss = %.4f/%.4f, train/valid accuracy = %.4f/%.4f'%(
train_loss[mn][-1], valid_loss[mn][-1], train_acc[mn][-1], valid_acc[mn][-1]))
plt.figure(figsize=(10, 5));
plt.subplot(1,2,1);
plt.plot(np.arange(0,len(train_acc[mn])), train_acc[mn],'-b', label='Training')
plt.plot(np.arange(0,len(valid_acc[mn])), valid_acc[mn],'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 1.1, ymin = 0.0)
plt.ylabel('accuracy')
plt.xlabel('log steps');
plt.subplot(1,2,2)
plt.plot(np.arange(0,len(train_loss[mn])), train_loss[mn],'-b', label='Training')
plt.plot(np.arange(0,len(valid_loss[mn])), valid_loss[mn],'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 3.0, ymin = 0.0)
plt.ylabel('loss')
plt.xlabel('log steps');
## visualize weights
mn = nn_name[0]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(mn)
W_conv1, W_conv2, W_conv3, _, _ = nn_graph.get_weights(sess)
sess.close()
print('W_conv1: min = ' + str(np.min(W_conv1)) + ' max = ' + str(np.max(W_conv1))
+ ' mean = ' + str(np.mean(W_conv1)) + ' std = ' + str(np.std(W_conv1)))
print('W_conv2: min = ' + str(np.min(W_conv2)) + ' max = ' + str(np.max(W_conv2))
+ ' mean = ' + str(np.mean(W_conv2)) + ' std = ' + str(np.std(W_conv2)))
print('W_conv3: min = ' + str(np.min(W_conv3)) + ' max = ' + str(np.max(W_conv3))
+ ' mean = ' + str(np.mean(W_conv3)) + ' std = ' + str(np.std(W_conv3)))
s_f_conv1 = nn_graph.s_f_conv1
s_f_conv2 = nn_graph.s_f_conv2
s_f_conv3 = nn_graph.s_f_conv3
W_conv1 = np.reshape(W_conv1,(s_f_conv1,s_f_conv1,1,6,6))
W_conv1 = np.transpose(W_conv1,(3,0,4,1,2))
W_conv1 = np.reshape(W_conv1,(s_f_conv1*6,s_f_conv1*6,1))
W_conv2 = np.reshape(W_conv2,(s_f_conv2,s_f_conv2,6,6,36))
W_conv2 = np.transpose(W_conv2,(2,0,3,1,4))
W_conv2 = np.reshape(W_conv2,(6*s_f_conv2,6*s_f_conv2,6,6))
W_conv2 = np.transpose(W_conv2,(2,0,3,1))
W_conv2 = np.reshape(W_conv2,(6*6*s_f_conv2,6*6*s_f_conv2))
W_conv3 = np.reshape(W_conv3,(s_f_conv3,s_f_conv3,6,6,36))
W_conv3 = np.transpose(W_conv3,(2,0,3,1,4))
W_conv3 = np.reshape(W_conv3,(6*s_f_conv3,6*s_f_conv3,6,6))
W_conv3 = np.transpose(W_conv3,(2,0,3,1))
W_conv3 = np.reshape(W_conv3,(6*6*s_f_conv3,6*6*s_f_conv3))
plt.figure(figsize=(15,5))
plt.subplot(1,3,1)
plt.gca().set_xticks(np.arange(-0.5, s_f_conv1*6, s_f_conv1), minor = False);
plt.gca().set_yticks(np.arange(-0.5, s_f_conv1*6, s_f_conv1), minor = False);
plt.grid(which = 'minor', color='b', linestyle='-', linewidth=1)
plt.title('W_conv1 ' + str(W_conv1.shape))
plt.colorbar(plt.imshow(W_conv1[:,:,0], cmap=cm.binary));
plt.subplot(1,3,2)
plt.gca().set_xticks(np.arange(-0.5, 6*6*s_f_conv2, 6*s_f_conv2), minor = False);
plt.gca().set_yticks(np.arange(-0.5, 6*6*s_f_conv2, 6*s_f_conv2), minor = False);
plt.grid(which = 'minor', color='b', linestyle='-', linewidth=1)
plt.title('W_conv2 ' + str(W_conv2.shape))
plt.colorbar(plt.imshow(W_conv2[:,:], cmap=cm.binary));
plt.subplot(1,3,3)
plt.gca().set_xticks(np.arange(-0.5, 6*6*s_f_conv3, 6*s_f_conv3), minor = False);
plt.gca().set_yticks(np.arange(-0.5, 6*6*s_f_conv3, 6*s_f_conv3), minor = False);
plt.grid(which = 'minor', color='b', linestyle='-', linewidth=1)
plt.title('W_conv3 ' + str(W_conv3.shape))
plt.colorbar(plt.imshow(W_conv3[:,:], cmap=cm.binary));
## visualize activations
img_no = 10;
mn = nn_name[0]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(mn)
(h_conv1, h_pool1, h_conv2, h_pool2,h_conv3, h_pool3, h_fc1,
h_fc2) = nn_graph.get_activations(sess, x_train_valid[img_no:img_no+1])
sess.close()
# original image
plt.figure(figsize=(15,9))
plt.subplot(2,4,1)
plt.imshow(x_train_valid[img_no].reshape(28,28),cmap=cm.binary);
# 1. convolution
plt.subplot(2,4,2)
plt.title('h_conv1 ' + str(h_conv1.shape))
h_conv1 = np.reshape(h_conv1,(-1,28,28,6,6))
h_conv1 = np.transpose(h_conv1,(0,3,1,4,2))
h_conv1 = np.reshape(h_conv1,(-1,6*28,6*28))
plt.imshow(h_conv1[0], cmap=cm.binary);
# 1. max pooling
plt.subplot(2,4,3)
plt.title('h_pool1 ' + str(h_pool1.shape))
h_pool1 = np.reshape(h_pool1,(-1,14,14,6,6))
h_pool1 = np.transpose(h_pool1,(0,3,1,4,2))
h_pool1 = np.reshape(h_pool1,(-1,6*14,6*14))
plt.imshow(h_pool1[0], cmap=cm.binary);
# 2. convolution
plt.subplot(2,4,4)
plt.title('h_conv2 ' + str(h_conv2.shape))
h_conv2 = np.reshape(h_conv2,(-1,14,14,6,6))
h_conv2 = np.transpose(h_conv2,(0,3,1,4,2))
h_conv2 = np.reshape(h_conv2,(-1,6*14,6*14))
plt.imshow(h_conv2[0], cmap=cm.binary);
# 2. max pooling
plt.subplot(2,4,5)
plt.title('h_pool2 ' + str(h_pool2.shape))
h_pool2 = np.reshape(h_pool2,(-1,7,7,6,6))
h_pool2 = np.transpose(h_pool2,(0,3,1,4,2))
h_pool2 = np.reshape(h_pool2,(-1,6*7,6*7))
plt.imshow(h_pool2[0], cmap=cm.binary);
# 3. convolution
plt.subplot(2,4,6)
plt.title('h_conv3 ' + str(h_conv3.shape))
h_conv3 = np.reshape(h_conv3,(-1,7,7,6,6))
h_conv3 = np.transpose(h_conv3,(0,3,1,4,2))
h_conv3 = np.reshape(h_conv3,(-1,6*7,6*7))
plt.imshow(h_conv3[0], cmap=cm.binary);
# 3. max pooling
plt.subplot(2,4,7)
plt.title('h_pool3 ' + str(h_pool3.shape))
h_pool3 = np.reshape(h_pool3,(-1,4,4,6,6))
h_pool3 = np.transpose(h_pool3,(0,3,1,4,2))
h_pool3 = np.reshape(h_pool3,(-1,6*4,6*4))
plt.imshow(h_pool3[0], cmap=cm.binary);
# 4. FC layer
plt.subplot(2,4,8)
plt.title('h_fc1 ' + str(h_fc1.shape))
h_fc1 = np.reshape(h_fc1,(-1,24,24))
plt.imshow(h_fc1[0], cmap=cm.binary);
# 5. FC layer
np.set_printoptions(precision=2)
print('h_fc2 = ', h_fc2)
mn = nn_name[0]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(mn)
y_valid_pred[mn] = nn_graph.forward(sess, x_valid)
sess.close()
y_valid_pred_label = one_hot_to_dense(y_valid_pred[mn])
y_valid_label = one_hot_to_dense(y_valid)
y_val_false_index = []
for i in range(y_valid_label.shape[0]):
if y_valid_pred_label[i] != y_valid_label[i]:
y_val_false_index.append(i)
print('# false predictions: ', len(y_val_false_index),'out of', len(y_valid))
plt.figure(figsize=(10,15))
for j in range(0,5):
for i in range(0,10):
if j*10+i<len(y_val_false_index):
plt.subplot(10,10,j*10+i+1)
plt.title('%d/%d'%(y_valid_label[y_val_false_index[j*10+i]],
y_valid_pred_label[y_val_false_index[j*10+i]]))
plt.imshow(x_valid[y_val_false_index[j*10+i]].reshape(28,28),cmap=cm.binary)
```
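The helpers `one_hot_to_dense` and `accuracy_from_one_hot_labels` used above are defined earlier in the notebook, outside this excerpt. A minimal sketch consistent with how they are called here (an assumption, not the notebook's actual code):

```python
import numpy as np

def one_hot_to_dense(y_one_hot):
    # assumed behavior: map each one-hot (or probability) row to its class index
    return np.argmax(y_one_hot, axis=1)

def accuracy_from_one_hot_labels(y_pred, y_true):
    # fraction of rows where the predicted and true argmax classes agree
    return np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))
```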
# 7. Stacking of models and training a meta-model
```
## read test data
test_df = pd.read_csv('test.csv')
# transform and normalize test data
x_test = test_df.iloc[:,0:].values.reshape(-1,28,28,1) # (28000,28,28,1) array
x_test = x_test.astype(np.float)
x_test = normalize_data(x_test)
print('x_test.shape = ', x_test.shape)
# for saving results
y_test_pred = {}
y_test_pred_labels = {}
## Stacking of neural networks
if False:
take_models = ['nn0','nn1','nn2','nn3','nn4','nn5','nn6','nn7','nn8','nn9']
# cross validations
# choose the same seed as was done for training the neural nets
kfold = sklearn.model_selection.KFold(len(take_models), shuffle=True, random_state = 123)
# train and test data for meta model
x_train_meta = np.array([]).reshape(-1,10)
y_train_meta = np.array([]).reshape(-1,10)
x_test_meta = np.zeros((x_test.shape[0], 10))
print('Out-of-folds predictions:')
# make out-of-folds predictions from base models
for i,(train_index, valid_index) in enumerate(kfold.split(x_train_valid)):
# training and validation data
x_train = x_train_valid[train_index]
y_train = y_train_valid[train_index]
x_valid = x_train_valid[valid_index]
y_valid = y_train_valid[valid_index]
# load neural network and make predictions
mn = take_models[i]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(mn)
y_train_pred[mn] = nn_graph.forward(sess, x_train[:len(x_valid)])
y_valid_pred[mn] = nn_graph.forward(sess, x_valid)
y_test_pred[mn] = nn_graph.forward(sess, x_test)
sess.close()
# create cloned model from base models
#model = sklearn.base.clone(base_models[take_models[i]])
#model.fit(x_train, y_train)
#y_train_pred_proba['tmp'] = model.predict_proba(x_train)[:,1]
#y_valid_pred_proba['tmp'] = model.predict_proba(x_valid)[:,1]
#y_test_pred_proba['tmp'] = model.predict_proba(x_test)[:,1]
# collect train and test data for meta model
x_train_meta = np.concatenate([x_train_meta, y_valid_pred[mn]])
y_train_meta = np.concatenate([y_train_meta, y_valid])
x_test_meta += y_test_pred[mn]
print(take_models[i],': train/valid accuracy = %.4f/%.4f'%(
accuracy_from_one_hot_labels(y_train_pred[mn], y_train[:len(x_valid)]),
accuracy_from_one_hot_labels(y_valid_pred[mn], y_valid)))
if False:
break;
# take average of test predictions
x_test_meta = x_test_meta/(i+1)
y_test_pred['stacked_models'] = x_test_meta
print('')
print('Stacked models: valid accuracy = %.4f'%accuracy_from_one_hot_labels(x_train_meta,
y_train_meta))
## use meta model
if False:
logreg = sklearn.linear_model.LogisticRegression(verbose=0, solver='lbfgs',
multi_class='multinomial')
# choose meta model
take_meta_model = 'logreg'
# train meta model
model = sklearn.base.clone(base_models[take_meta_model])
model.fit(x_train_meta, one_hot_to_dense(y_train_meta))
y_train_pred['meta_model'] = model.predict_proba(x_train_meta)
y_test_pred['meta_model'] = model.predict_proba(x_test_meta)
print('Meta model: train accuracy = %.4f'%accuracy_from_one_hot_labels(x_train_meta,
y_train_pred['meta_model']))
## choose one single model for test prediction
if True:
mn = nn_name[0] # choose saved model
nn_graph = nn_class() # create instance
sess = nn_graph.load_session_from_file(mn) # receive session
y_test_pred = {}
y_test_pred_labels = {}
# split evaluation of test predictions into batches
kfold = sklearn.model_selection.KFold(40, shuffle=False)
for i,(train_index, valid_index) in enumerate(kfold.split(x_test)):
if i==0:
y_test_pred[mn] = nn_graph.forward(sess, x_test[valid_index])
else:
y_test_pred[mn] = np.concatenate([y_test_pred[mn],
nn_graph.forward(sess, x_test[valid_index])])
sess.close()
```
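The out-of-fold stacking scheme above, stripped to its essentials: each fold's base model predicts on its held-out split, and those predictions are concatenated into the meta-model's training features. A numpy-only sketch with hypothetical base predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes, n_folds = 12, 3, 3

# Pretend each fold's base model emits class probabilities for its
# held-out samples; stacking concatenates them into meta-features.
x_train_meta = np.empty((0, n_classes))
y_train_meta = np.empty((0, n_classes))
for fold in range(n_folds):
    oof_pred = rng.dirichlet(np.ones(n_classes), size=n_samples // n_folds)
    oof_true = np.eye(n_classes)[rng.integers(n_classes, size=n_samples // n_folds)]
    x_train_meta = np.concatenate([x_train_meta, oof_pred])
    y_train_meta = np.concatenate([y_train_meta, oof_true])

print(x_train_meta.shape, y_train_meta.shape)  # (12, 3) (12, 3)
```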
# Submit the test results
```
# choose the test predictions and submit the results
#mn = 'meta_model'
mn = nn_name[0]
y_test_pred_labels[mn] = one_hot_to_dense(y_test_pred[mn])
print(mn+': y_test_pred_labels[mn].shape = ', y_test_pred_labels[mn].shape)
unique, counts = np.unique(y_test_pred_labels[mn], return_counts=True)
print(dict(zip(unique, counts)))
# save predictions
np.savetxt('submission.csv',
np.c_[range(1,len(x_test)+1), y_test_pred_labels[mn]],
delimiter=',',
header = 'ImageId,Label',
comments = '',
fmt='%d')
print('submission.csv completed')
plt.figure(figsize=(10,15))
for j in range(0,5):
for i in range(0,10):
plt.subplot(10,10,j*10+i+1)
plt.title('%d'%y_test_pred_labels[mn][j*10+i])
plt.imshow(x_test[j*10+i].reshape(28,28), cmap=cm.binary)
```
# Peeker Groups
`Peeker` objects are normally stored in a global list, but sometimes you might want
to create a group of `Peeker`s for a set of signals.
This is easily done using the `PeekerGroup` class.
Once again, I'll use the hierarchical adder example to illustrate the use of `PeekerGroup`s.
```
from myhdl import *
from myhdlpeek import Peeker, PeekerGroup
def adder_bit(a, b, c_in, sum_, c_out):
'''Single bit adder.'''
@always_comb
def adder_logic():
sum_.next = a ^ b ^ c_in
c_out.next = (a & b) | (a & c_in) | (b & c_in)
# Add some global peekers to monitor the inputs and outputs.
Peeker(a, 'a')
Peeker(b, 'b')
Peeker(c_in, 'c_in')
Peeker(sum_, 'sum')
Peeker(c_out, 'c_out')
return adder_logic
def adder(a, b, sum_):
'''Connect single-bit adders to create a complete adder.'''
c = [Signal(bool(0)) for _ in range(len(a)+1)] # Carry signals between stages.
s = [Signal(bool(0)) for _ in range(len(a))] # Sum bit for each stage.
stages = [] # Storage for adder bit instances.
# Create the adder bits and connect them together.
for i in range(len(a)):
stages.append( adder_bit(a=a(i), b=b(i), sum_=s[i], c_in=c[i], c_out=c[i+1]) )
# Concatenate the sum bits and send them out on the sum_ output.
@always_comb
def make_sum():
sum_.next = ConcatSignal(*reversed(s))
return instances() # Return all the adder stage instances.
# Create signals for interfacing to the adder.
a, b, sum_ = [Signal(intbv(0,0,8)) for _ in range(3)]
# Clear-out any existing peeker stuff before instantiating the adder.
Peeker.clear()
# Instantiate the adder.
add_1 = adder(a=a, b=b, sum_=sum_)
# Create a group of peekers to monitor the top-level buses.
# Each argument to PeekerGroup assigns a signal to a name for a peeker.
top_pkr = PeekerGroup(a_bus=a, b_bus=b, sum_bus=sum_)
# Create a testbench generator that applies random inputs to the adder.
from random import randrange
def test():
for _ in range(8):
a.next, b.next = randrange(0, a.max), randrange(0, a.max)
yield delay(1)
# Simulate the adder, testbench and peekers.
Simulation(add_1, test(), *Peeker.instances()).run()
# Display only the peekers for the top-level buses.
# The global peekers in the adder bits won't show up.
top_pkr.show_waveforms('a_bus b_bus sum_bus')
top_pkr.to_html_table('a_bus b_bus sum_bus')
```
# Naive Sampling Function - Sampling with Uniform Estimated User pp Scores
**Contributors:** Victor Lin
```
import sys
sys.path.append('../..')
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
from exploration.config import mongo_inst
from mlpp.data_collection.sample import use_random_sample, get_custom_user_ids
from mlpp.data_collection.sample_func import sampleFuncGenerator
from mlpp.data_collection.distributions import best_fit_distribution
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
osu_random_db = mongo_inst['osu_random_db']
osu_dump = (osu_random_db['osu_scores_high'], osu_random_db['osu_user_stats'])
osu_scores_high, osu_user_stats = osu_dump
pdf_dump = (osu_random_db['scores_sample_3k'], osu_random_db['users_sample_3k'])
pdf_scores_sample, pdf_users_sample = pdf_dump
DATE_LIMIT = datetime(2019,1,1)
generator = sampleFuncGenerator(date_limit = DATE_LIMIT)
user_ids = use_random_sample(*osu_dump, *pdf_dump, 3000)
scores = list(pdf_scores_sample.find({'date': {'$gt': DATE_LIMIT}}, {'mlpp': 1, '_id': 0}))
pp_data_raw = [s['mlpp']['est_user_raw_pp'] for s in scores]
pp_data = [s['mlpp']['est_user_pp'] for s in scores]
fig, axs = plt.subplots(1, 2)
fig.set_figwidth(15)
_ = axs[0].hist(pp_data_raw, bins = 200)
_ = axs[1].hist(pp_data, bins = 200)
# best_dist, best_params = best_fit_distribution(pp_data)
best_dist = st.recipinvgauss
best_params = best_dist.fit(pp_data)
arg = best_params[:-2]
loc = best_params[-2]
scale = best_params[-1]
pdf = lambda i: best_dist.pdf(i, loc=loc, scale=scale, *arg)
plt.figure(figsize=(10,6))
est_pp_pdf = best_dist.pdf(np.arange(1, 7000), loc=loc, scale=scale, *arg)
_ = plt.hist(pp_data, bins = 200, density=True)
_ = plt.plot(est_pp_pdf)
SAMPLE_PROPORTIONS = np.asarray([.01, .02, .05, .1])
pcnts = [int(prop * 100) for prop in SAMPLE_PROPORTIONS]
sample_funcs = [generator.pdf(pdf_scores_sample, st.recipinvgauss, prop) for prop in SAMPLE_PROPORTIONS]
for i, f in enumerate(sample_funcs):
plt.plot(f, label = f'{pcnts[i]}%')
_ = plt.legend(loc='upper left')
def test_pcnt_sampled(sample_func):
sampled_users = get_custom_user_ids(osu_user_stats, sample_func)
sampled_scores = osu_scores_high.find({
'user_id': {
'$in': sampled_users
},
'date': {
'$gt': DATE_LIMIT
}
}, {'mlpp.est_user_pp': 1})
return sampled_scores.count() / osu_scores_high.count()
from tqdm import tqdm
pcnt_1_avg_of_expected = sum(test_pcnt_sampled(sample_funcs[0]) / .01 for i in tqdm(range(10))) / 10
pcnt_2_avg_of_expected = sum(test_pcnt_sampled(sample_funcs[1]) / .02 for i in tqdm(range(10))) / 10
print(f'\n\nProportion of expected 1%: {pcnt_1_avg_of_expected:.2f}')
print(f'Proportion of expected 2%: {pcnt_2_avg_of_expected:.2f}')
PROP_BONUS_FACTOR = 1 / pcnt_1_avg_of_expected
SAMPLE_PROPORTIONS *= PROP_BONUS_FACTOR
sample_funcs = [sampleFuncGenerator().pdf(pdf_scores_sample, st.recipinvgauss, prop) for prop in SAMPLE_PROPORTIONS]
samples = []
for i, f in enumerate(sample_funcs):
samples.append(generator.test_sample_func(*osu_dump, sample_funcs[i]))
scores = samples[-1][0]
pcnt_scores = 100 * len(scores) / osu_scores_high.count()
print(f"{pcnts[i]}% Sampling: {pcnt_scores:.2f}% sampled")
score_pp = [[s['mlpp']['est_user_pp'] for s in sc] for sc, u in samples]
fig, axs = plt.subplots(4, figsize=(6, 18))
for i in range(len(SAMPLE_PROPORTIONS)):
ax = axs[i]
ax.hist(score_pp[i], bins = 50, label = f'{pcnts[i]}%', density = True)
ax.plot([0, 7000], [1/7000, 1/7000])
ax.set(xlabel = "Proportion", ylabel="Score est pp")
ax.set_title(f'{pcnts[i]}% Sample')
_ = plt.tight_layout()
```
# Multivariate Analysis for Planetary Atmospheres
This notebook relies on the pickled dataframe in the `notebooks/` folder. You can also compute your own using `3_ColorColorFigs.ipynb`
```
#COLOR COLOR PACKAGE
from colorcolor import compute_colors as c
from colorcolor import stats
import matplotlib.pyplot as plt
import pandas as pd
import pickle as pk
import numpy as np
from itertools import combinations as comb
import seaborn as sns
%matplotlib inline
```
This dataframe contains:
- **independent variables** : filter observations
- **dependent variables** : physical planet parameters
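A generic sketch of this independent/dependent split (hypothetical miniature columns, not the real dataframe):

```python
import pandas as pd

# Hypothetical miniature of the dataframe layout: filter columns
# (independent) plus physical planet parameters (dependent).
df = pd.DataFrame({
    'f575': [1.0, 2.0], 'f661': [0.5, 1.5],   # filter observations
    'metallicity': [1, 10], 'cloud': [0, 1],  # planet parameters
})
filter_cols = ['f575', 'f661']
X = df[filter_cols]                 # independent variables
y = df['metallicity'].astype(str)   # dependent variable as class labels
print(X.shape, y.tolist())
```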
```
data= pk.load(open('wfirst_colors_dataframe.pk','rb'))
data=data.dropna()[~data.dropna().isin([np.inf, -np.inf])].dropna() #drop infinities and nans
#let's specify our y of interest for this tutorial, feel free to play around with this
yofinterest = 'metallicity'
#lets also specify a filter set. Let's just focus on WFIRST filters
filters = c.print_filters('wfirst')
#and also define the combinations: e.g. Filter1 - Filter2
filter_combinations = [i[0]+i[1] for i in comb(filters,2)] +filters
```
### Explore Correlation Matrix: Fig 6 Batalha+2018
In figure 6 we looked at the difference between the correlation matrix with and without the cloud sample
```
#lets look at only the cloud free sample
corr_matrix = data.loc[(data['cloud']==0)].corr()
fig, ax = plt.subplots(figsize=(25,10))
#here I am simplifying the image by adding in an absolute value
#you can remove it if you are interested in seeing what is positively and negatively correlated
sns.heatmap(abs(corr_matrix), vmax=1, square=False, linewidths=.5, ax=ax).xaxis.tick_top()
```
Figure 6 in Batalha 2018 is a subset of this larger block
```
#lets look at everything
corr_matrix = data.corr()
fig, ax = plt.subplots(figsize=(25,10))
#here I am simplifying the image by adding in an absolute value
#you can remove it if you are interested in seeing what is positively and negatively correlated
sns.heatmap(abs(corr_matrix), vmax=1, square=False, linewidths=.5, ax=ax).xaxis.tick_top()
```
**See immediately how there are fewer strongly correlated values for the physical parameters than for the filters?**
## Try Linear Discriminant Analysis For Classification
```
#try cloud free first
subset = data.loc[(data['cloud']==0) & (data['phase']==90)]
#separate independent
X = subset.loc[:,filter_combinations]
#and dependent variables (also this make it a string so we can turn it into a label)
y = subset[yofinterest].astype(str)
lda_values=stats.lda_analysis(X,y)
```
These warnings come up because we have used both absolute and relative filters. LDA, like regression techniques, involves computing a matrix inversion, which is inaccurate if the determinant is close to 0 (i.e., two or more variables are almost a linear combination of each other). This means that our relative and absolute filter combinations are nearly linear combinations of each other (which makes sense). For classification purposes this is okay for now.
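That near-singularity is easy to see on a toy feature matrix (hypothetical data, not our filters): a column built as the difference of two others leaves the matrix rank-deficient, with an enormous condition number.

```python
import numpy as np

# The third column is exactly col0 - col1, like a relative filter
# built from two absolute filters.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X = np.column_stack([X, X[:, 0] - X[:, 1]])

print(np.linalg.matrix_rank(X))   # 2: only two independent columns
print(np.linalg.cond(X) > 1e10)   # near-singular -> huge condition number
```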
```
#Now lets unconstrain the phase
subset = data.loc[(data['cloud']==0)]
#separate independent
X = subset.loc[:,filter_combinations]
#and dependent variables (also this make it a string so we can turn it into a label)
y = subset[yofinterest].astype(str)
lda_values=stats.lda_analysis(X,y)
#Now lets unconstrain everything
subset = data
#separate independent
X = subset.loc[:,filter_combinations]
#and dependent variables (also this make it a string so we can turn it into a label)
y = subset[yofinterest].astype(str)
lda_values=stats.lda_analysis(X,y)
```
```
from sklearn.cluster import MeanShift, estimate_bandwidth
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import math
import os
import sys
from numpy.fft import fft, ifft
import glob
def remove_periodic(X, df_index, detrending=True, model='additive', frequency_threshold=0.1e12):
rad = np.array(X)
if detrending:
det_rad = rad - np.average(rad)
else:
det_rad = rad
det_rad_fft = fft(det_rad)
# Get the power spectrum
rad_ps = [np.abs(rd)**2 for rd in det_rad_fft]
clean_rad_fft = [det_rad_fft[i] if rad_ps[i] > frequency_threshold else 0
for i in range(len(det_rad_fft))]
rad_series_clean = ifft(clean_rad_fft)
rad_series_clean = [value.real for value in rad_series_clean]
if detrending:
rad_trends = rad_series_clean + np.average(rad)
else:
rad_trends = rad_series_clean
rad_clean_ts = pd.Series(rad_trends, index=df_index)
#rad_clean_ts[(rad_clean_ts.index.hour < 6) | (rad_clean_ts.index.hour > 20)] = 0
residual = rad - rad_clean_ts.values
clean = rad_clean_ts.values
return residual, clean
def load_data(path, resampling=None):
## some resampling options: 'H' - hourly, '15min' - 15 minutes, 'M' - montlhy
## more options at:
## http://benalexkeen.com/resampling-time-series-data-with-pandas/
allFiles = glob.iglob(path + "/**/*.txt", recursive=True)
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
#print("Reading: ",file_)
df = pd.read_csv(file_,index_col="datetime",parse_dates=['datetime'], header=0, sep=",")
if frame.columns is None :
frame.columns = df.columns
list_.append(df)
frame = pd.concat(list_)
if resampling is not None:
frame = frame.resample(resampling).mean()
frame = frame.fillna(method='ffill')
return frame
path = '/Users/cseveriano/spatio-temporal-forecasting/data/processed/NREL/Oahu'
df = load_data(path)
# Fix the column order
df.columns = ['DHHL_3','DHHL_4', 'DHHL_5', 'DHHL_10', 'DHHL_11', 'DHHL_9', 'DHHL_2', 'DHHL_1', 'DHHL_1_Tilt', 'AP_6', 'AP_6_Tilt', 'AP_1', 'AP_3', 'AP_5', 'AP_4', 'AP_7', 'DHHL_6', 'DHHL_7', 'DHHL_8']
# the beginning of the data has measurement gaps
df = df.loc[df.index > '2010-03-20']
df.drop(['DHHL_1_Tilt', 'AP_6_Tilt'], axis=1, inplace=True)
```
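The FFT-thresholding idea behind `remove_periodic` can be checked on a toy signal. This is a standalone sketch of the same steps (detrend omitted), not a call to the function above:

```python
import numpy as np
from numpy.fft import fft, ifft

# A clean periodic component buried in noise.
n = 1000
t = np.arange(n)
clean = 3.0 * np.sin(2 * np.pi * t / 50)
noisy = clean + np.random.default_rng(1).normal(scale=0.5, size=n)

# Keep only FFT components whose power exceeds a threshold,
# then invert back to the time domain.
spec = fft(noisy)
power = np.abs(spec) ** 2
threshold = 0.1 * power.max()
recovered = ifft(np.where(power > threshold, spec, 0)).real

print(np.max(np.abs(recovered - clean)))  # small residual error
```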
## Preparing the training and test sets
```
clean_df = pd.DataFrame(columns=df.columns, index=df.index)
residual_df = pd.DataFrame(columns=df.columns, index=df.index)
for col in df.columns:
residual, clean = remove_periodic(df[col].tolist(), df.index, frequency_threshold=0.01e12)
clean_df[col] = clean.tolist()
residual_df[col] = residual.tolist()
train_df = df[(df.index >= '2010-09-01') & (df.index <= '2011-09-01')]
train_clean_df = clean_df[(clean_df.index >= '2010-09-01') & (clean_df.index <= '2011-09-01')]
train_residual_df = residual_df[(residual_df.index >= '2010-09-01') & (residual_df.index <= '2011-09-01')]
test_df = df[(df.index >= '2010-08-05')& (df.index < '2010-08-06')]
test_clean_df = clean_df[(clean_df.index >= '2010-08-05')& (clean_df.index < '2010-08-06')]
test_residual_df = residual_df[(residual_df.index >= '2010-08-05')& (residual_df.index < '2010-08-06')]
lat = [21.31236,21.31303,21.31357,21.31183,21.31042,21.31268,21.31451,21.31533,21.30812,21.31276,21.31281,21.30983,21.31141,21.31478,21.31179,21.31418,21.31034]
lon = [-158.08463,-158.08505,-158.08424,-158.08554,-158.0853,-158.08688,-158.08534,-158.087,-158.07935,-158.08389,-158.08163,-158.08249,-158.07947,-158.07785,-158.08678,-158.08685,-158.08675]
additional_info = pd.DataFrame({'station': df.columns, 'latitude': lat, 'longitude': lon })
additional_info[(additional_info.station == col)].latitude.values[0]
#ll = []
#for ind, row in train_residual_df.iterrows():
# for col in train_residual_df.columns:
# lat = additional_info[(additional_info.station == col)].latitude.values[0]
# lon = additional_info[(additional_info.station == col)].longitude.values[0]
# doy = ind.dayofyear
# hour = ind.hour
# minute = ind.minute
# irradiance = row[col]
# ll.append([lat, lon, doy, hour, minute, irradiance])
#ms_df = pd.DataFrame(columns=['latitude','longitude','dayofyear', 'hour', 'minute','irradiance'], data=ll)
ll = []
for ind, row in train_residual_df.iterrows():
for col in train_residual_df.columns:
lat = additional_info[(additional_info.station == col)].latitude.values[0]
lon = additional_info[(additional_info.station == col)].longitude.values[0]
irradiance = row[col]
ll.append([lat, lon, irradiance])
ms_df = pd.DataFrame(columns=['latitude','longitude','irradiance'], data=ll)
ms_df
```
## Mean Shift
Data normalization
```
from sklearn import preprocessing
x = ms_df.values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
bandwidth = estimate_bandwidth(x_scaled, quantile=0.2, n_samples=int(len(ms_df)*0.1), n_jobs=-1)
ms = MeanShift(bandwidth=bandwidth, n_jobs=-1)
ms.fit(x_scaled)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
labels
```
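A self-contained miniature of the same pipeline (min-max scaling, bandwidth estimation, MeanShift) on synthetic blobs instead of the irradiance table, just to see the mechanics:

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs
from sklearn.preprocessing import MinMaxScaler

# Two well-separated synthetic blobs stand in for the real data.
X, blob_id = make_blobs(n_samples=300, centers=[[0, 0], [5, 5]],
                        cluster_std=0.3, random_state=0)
X_scaled = MinMaxScaler().fit_transform(X)

bandwidth = estimate_bandwidth(X_scaled, quantile=0.2, n_samples=100)
ms = MeanShift(bandwidth=bandwidth).fit(X_scaled)

n_clusters = len(np.unique(ms.labels_))
print("number of estimated clusters :", n_clusters)
```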
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
from statsmodels.tsa.seasonal import seasonal_decompose
# our python file with functions
import altedc as altedc
data_path = 'data/'
data = pd.read_csv(f'{data_path}train_phase_1.csv')
data.date = pd.to_datetime(data.date, format='%Y-%m-%d %H:%M:%S')
test = pd.read_csv(f'{data_path}test_phase_1.csv')
test.date = pd.to_datetime(test.date, format='%Y-%m-%d %H:%M:%S')
data.shape, test.shape
assert data.dtypes.equals(pd.Series({
'date': 'datetime64[ns]',
'wp1': 'float64',
'u': 'float64',
'v': 'float64',
'ws': 'float64',
'wd': 'float64',
}))
assert test.dtypes.equals(pd.Series({
'date': 'datetime64[ns]',
'u': 'float64',
'v': 'float64',
'ws': 'float64',
'wd': 'float64',
}))
assert not data.isnull().any(axis=None) and not test.isnull().any(axis=None)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
train, val = train_test_split(data, test_size=0.2, shuffle=True)
print(train.shape, val.shape)
X_train = train['ws'].values.reshape((-1, 1))
y_train = train['wp1'].values
X_val = val['ws'].values.reshape((-1, 1))
y_val = val['wp1'].values
lm = LinearRegression()
lm.fit(X_train, y_train)
mean_absolute_error(lm.predict(X_val), y_val)
from sklearn.dummy import DummyRegressor
dm = DummyRegressor(strategy='mean')
dm.fit(X_train, y_train)
print(mean_absolute_error(dm.predict(X_val), y_val))
dm = DummyRegressor(strategy='median')
dm.fit(X_train, y_train)
print(mean_absolute_error(dm.predict(X_val), y_val))
from sklearn.ensemble import RandomForestRegressor
X_train = train[['ws', 'wd', 'u', 'v']].values
y_train = train['wp1'].values
X_val = val[['ws', 'wd', 'u', 'v']].values
y_val = val['wp1'].values
rf = RandomForestRegressor(n_jobs=-1)
rf.fit(X_train, y_train)
print(mean_absolute_error(rf.predict(X_val), y_val))
#make new features
df_copy = altedc.create_features_train(train)
df_copy_val = altedc.create_features_val(val)
df_copy_test = altedc.create_features_test(test)
cols=['u', 'v', 'ws', 'wd', 'wva', 'mwd', 'vh', 'hr', 'mo', 'dy', 'wk', 'qr', 'doy', 'dow', 'yr', 'u_z', 'v_z', 'ws_z', 'wd_z', 'wsq', 'wdc', 'wvac', 'mwdc','rwd', 'u*v', 'wsr', 'ws100m', 'ws3', 'wpe','pa','vh3','daws','davh','year_sin','year_cos','month_sin', 'month_cos', 'quarter_sin', 'quarter_cos','semester_sin',
'semester_cos', 'twowk_sin', 'twowk_cos', 'week_sin', 'week_cos', 'twomonth_sin', 'twomonth_cos', 'fourmonth_sin',
'fourmonth_cos', 'fivemonth_sin', 'fivemonth_cos',
'sevenmonth_sin', 'sevenmonth_cos', 'ninemonth_sin', 'ninemonth_cos', 'thirteenmonth_cos', 'elevenmonth_cos','fivemonth_sin','sevenmonth_sin', 'sixteenmonth_cos','ti','vhi','uv_dir', 'wswdsin','wswdcos','twoday_sin','twoday_cos','tid','gust','dmws',#'wd_ns','wd_ew','wd_avg' ,'wsh_max','vhi_max','ws_diff','vhi_diff','ti_diff' #,'hmws' #, 'tiv', #'vhiv'#'gusti'#,'dwd', 'dti','pai','dvh','dpa'
# 'pai','vhpa'
] # 'vhpa' ,'vhid','paid', 'pai', 'haws', 'dmvh','dpa'
len(cols)
#Try RF on additional features
from sklearn.ensemble import RandomForestRegressor
X_train = df_copy[cols].values #'wva', 'mwd',
y_train = df_copy['wp1'].values
X_val = df_copy_val[cols].values #'wva', 'mwd',
y_val = df_copy_val['wp1'].values
rf = RandomForestRegressor(bootstrap=False, max_depth=25, max_features=12,
n_estimators=2000, n_jobs=-1)
rf.fit(X_train, y_train)
print(mean_absolute_error(rf.predict(X_val), y_val))
# Earlier validation MAEs recorded while tuning:
#   0.06642867238363016 (min_samples_split=2, min_samples_leaf=1, max_depth=20, max_features=10)
#   0.06682952159550563, 0.06961529322707194, 0.06871897093902136
#   (bootstrap=False, max_depth=25, max_features=12, n_estimators=2000)
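# The hyperparameters above were tuned by hand; the same search can be
# automated. A minimal sketch (synthetic data and a hypothetical grid, not
# the competition data) with scikit-learn's RandomizedSearchCV:
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import make_regression

Xs, ys = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=0)
search = RandomizedSearchCV(
    RandomForestRegressor(n_jobs=-1, random_state=0),
    param_distributions={'max_depth': [10, 25, None],
                         'max_features': [4, 8],
                         'bootstrap': [True, False]},
    n_iter=4, cv=2, scoring='neg_mean_absolute_error', random_state=0)
search.fit(Xs, ys)
print(search.best_params_)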
#predictions on test set with RF and additional features
X_test = df_copy_test[cols].values
df_predictions = pd.DataFrame({
'date': df_copy_test['date'],
'wp1': rf.predict(X_test),
})
df_predictions.to_csv(r'predictions.csv', index=False, sep=';')
df_predictions.head()
!pip3 install xgboost
import xgboost as xgb
X_train = df_copy[cols].values #'wva', 'mwd',,'u*v','rwd'
y_train = df_copy['wp1'].values
X_val = df_copy_val[cols].values #'wva', 'mwd',,'u*v','rwd'
y_val = df_copy_val['wp1'].values
xgb_reg = xgb.XGBRegressor(random_state=0)  # avoid shadowing the xgb module
xgb_reg.fit(X_train, y_train)
print(mean_absolute_error(xgb_reg.predict(X_val), y_val))
!pip install lightgbm
import lightgbm as lgb
X_train = df_copy[cols].values #'wva', 'mwd',
y_train = df_copy['wp1'].values
X_val = df_copy_val[cols].values #'wva', 'mwd',
y_val = df_copy_val['wp1'].values
lgbm = lgb.LGBMRegressor(objective="tweedie", max_depth=15, num_leaves=80,
                         num_iterations=2500, learning_rate=0.01,
                         min_data_in_leaf=100, min_sum_hessian_in_leaf=10,
                         feature_fraction=0.6)
lgbm.fit(X_train, y_train)
print(mean_absolute_error(lgbm.predict(X_val), y_val))
#bagging regressor
from sklearn.ensemble import BaggingRegressor
X_train = df_copy[cols].values #'wva', 'mwd',,'u*v','rwd'
y_train = df_copy['wp1'].values
X_val = df_copy_val[cols].values #'wva', 'mwd',,'u*v','rwd'
y_val = df_copy_val['wp1'].values
br = BaggingRegressor(random_state=0,bootstrap=False, n_estimators=2000, max_features=25, max_samples=0.5)
br.fit(X_train, y_train)
print(mean_absolute_error(br.predict(X_val), y_val))
#RF feature importances
rf.feature_importances_
import matplotlib.pyplot as plt  # not imported earlier in this notebook
plt.rcParams.update({'figure.figsize': (12, 30)})
sorted_idx = rf.feature_importances_.argsort()
plt.barh(df_copy[cols].columns[sorted_idx], rf.feature_importances_[sorted_idx])
plt.xlabel("Random Forest Feature Importance")
!pip3 install --upgrade pip
!pip3 install numpy
!pip3 install pandas
!pip3 install matplotlib
!pip3 install scikit-learn
!pip3 install keras
!pip3 install tensorflow
import sklearn
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
import os
import tensorflow as tf
import keras
from keras import Input
from keras.models import Sequential, load_model, Model
from keras.layers import Dense, LSTM, Dropout
from keras.layers import Conv1D, Flatten, MaxPooling1D
from keras.callbacks import EarlyStopping, ModelCheckpoint
from numpy.random import seed
seed(100)
tf.random.set_seed(100)
df=data.copy()
df_cols= df.columns.to_list()
# Define the features that will be used for training
train_columns = df_cols[1:]
# Get only the train_columns part of pandas
# Call split_data function and print the shape of each set
df_train, df_val, df_test = altedc.split_data(df[train_columns])
print('Train \n', df_train)
print('Val \n', df_val)
print('Test \n', df_test)
# Transform pandas to numpy arrays
train_data = df_train.values
val_data = df_val.values
test_data = df_test.values
print('Train data has shape', np.shape(train_data))
print('Validation data has shape', np.shape(val_data))
print('Test data has shape', np.shape(test_data))
# Feature Scaling using Min-Max Scaler
# Use fit_transform for train_data and transform for val, test data
scaler = MinMaxScaler(feature_range = (0, 1))
train_data = scaler.fit_transform(train_data)
val_data = scaler.transform(val_data)
test_data = scaler.transform(test_data)
# Set the desirable history for creating samples from the time-series
seq_len = 20
ahead = 1
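# The helper altedc.create_supervised_data is not shown here; a common
# implementation of this sliding-window step (an assumption, not the actual
# altedc code) looks like:
import numpy as np

def _create_supervised(data, seq_len, ahead=1):
    # Each sample is a window of `seq_len` rows; the target is the first
    # column `ahead` steps after the end of the window.
    X, y = [], []
    for i in range(len(data) - seq_len - ahead + 1):
        X.append(data[i:i + seq_len])
        y.append(data[i + seq_len + ahead - 1, 0])
    return np.array(X), np.array(y)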
X_train, y_train = altedc.create_supervised_data(train_data, seq_len=seq_len, ahead=ahead)
print('X_train shape is ', np.shape(X_train))
print('y_train shape is ', np.shape(y_train))
X_val, y_val = altedc.create_supervised_data(val_data, seq_len=seq_len, ahead=ahead)
print('X_val shape is ', np.shape(X_val))
print('y_val shape is ', np.shape(y_val))
X_test, y_test = altedc.create_supervised_data(test_data, seq_len=seq_len, ahead=ahead)
print('X_test shape is ', np.shape(X_test))
print('y_test shape is ', np.shape(y_test))
keras.backend.clear_session()
seed(100)
tf.random.set_seed(100)
model = altedc.create_model(np.shape(X_train)[1], np.shape(X_train)[2])
model.summary()
history = altedc.compile_and_fit(model, X_train, y_train, X_val, y_val, patience=10)
def print_metrics_model(X_train, y_train, X_val, y_val, X_test, y_test):
    print('Evaluation metrics')
    # Evaluate each split once instead of twice
    for name, X, y in [('Training', X_train, y_train),
                       ('Validation', X_val, y_val),
                       ('Test', X_test, y_test)]:
        mse, mae = model.evaluate(X, y, verbose=0)
        print('{} Data - MSE Loss: {:.8f}, MAE Loss: {:.8f}'.format(name, mse, mae))
altedc.plot_loss(history)
print_metrics_model(X_train, y_train, X_val, y_val, X_test, y_test)
predictions = model.predict(X_test)
print(predictions.shape)
altedc.plot_predictions_test(y_test.squeeze(), predictions)
print('Real mse on test', mean_squared_error(y_test.squeeze(),predictions))
model.save('model.h5')
```
## loading an image
```
from PIL import Image
im = Image.open("lena.png")
```
## examine the file contents
```
from __future__ import print_function
print(im.format, im.size, im.mode)
```
- The *format* attribute identifies the source of an image. If the image was not read from a file, it is set to None.
- The *size* attribute is a 2-tuple containing width and height (in pixels).
- The *mode* attribute defines the number and names of the bands in the image, and also the pixel type and depth.
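A quick way to see all three attributes without a file on disk is to create an image in memory (a small sketch; no image file is assumed):

```
from PIL import Image

# An image created in memory has no source file, so format is None
im = Image.new("RGB", (64, 32))
print(im.format, im.size, im.mode)  # None (64, 32) RGB
```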
## let’s display the image we just loaded
```
im.show()
```
- The standard version of show() is not very efficient, since it saves the image to a temporary file and calls the xv utility to display the image. If you don’t have xv installed, it won’t even work. When it does work though, it is very handy for debugging and tests.
## Reading and writing images
- Python Imaging Library (PIL)
- You don’t have to know the file format to open a file. The library automatically determines the format based on the contents of the file.
- Unless you specify the format, the library uses the filename extension to discover which file storage format to use.
## Convert all image files to JPEG
```
# Variant reading filenames from the command line (for reference only):
from __future__ import print_function
import os, sys
from PIL import Image
for infile in sys.argv[1:]:
f, e = os.path.splitext(infile)
outfile = f + ".jpg"
if infile != outfile:
try:
Image.open(infile).save(outfile)
except IOError:
print("cannot convert", infile)
from __future__ import print_function
import os, sys
from PIL import Image
for infile in os.listdir(os.getcwd()):
f, e = os.path.splitext(infile)
print(f)
print(e)
outfile = f + ".jpg"
print(outfile)
if infile != outfile:
try:
Image.open(infile).save(outfile)
print('converted',infile,'to',outfile)
except IOError:
print("cannot convert", infile)
```
## Create JPEG thumbnails
```
from __future__ import print_function
import os, sys
from PIL import Image
size = (128, 128)
for infile in os.listdir(os.getcwd()):
outfile = os.path.splitext(infile)[0] + ".thumbnail"
print(infile, outfile)
if infile != outfile:
try:
im = Image.open(infile)
im.thumbnail(size)
im.save(outfile, "JPEG")
except IOError:
print("cannot create thumbnail for", infile)
print(os.path.splitext('how/are/you/a.png'))
```
- It is important to note that the library doesn’t decode or load the raster data unless it really has to. When you open a file, the file header is read to determine the file format and extract things like mode, size, and other properties required to decode the file, but the rest of the file is not processed until later.
- This means that opening an image file is a fast operation, which is independent of the file size and compression type.
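This lazy decoding can be seen with an in-memory buffer (a sketch; the buffer stands in for a file on disk):

```
from io import BytesIO
from PIL import Image

# Save a PNG to an in-memory buffer, then reopen it
buf = BytesIO()
Image.new("RGB", (800, 600), (255, 0, 0)).save(buf, "PNG")
buf.seek(0)

im = Image.open(buf)   # only the header is parsed here
print(im.size)         # metadata is available immediately
px = im.load()         # the raster data is decoded at this point
print(px[0, 0])
```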
## Identify Image Files
```
from __future__ import print_function
import os, sys
from PIL import Image
for infile in os.listdir(os.getcwd()):
#print(infile)
try:
with Image.open(infile) as im:
print(infile, im.format, "%dx%d" % im.size, im.mode)
print(type(im.size))
except IOError:
pass
```
## Cutting, pasting, and merging images
- The Image class contains methods allowing you to manipulate regions within an image. To extract a sub-rectangle from an image, use the crop() method.
## Copying a subrectangle from an image
```
im = Image.open("lena.png")
box = (100, 100, 400, 400)
region = im.crop(box)
```
- The region could now be processed in a certain manner and pasted back.
## Processing a subrectangle, and pasting it back
```
region = region.transpose(Image.ROTATE_180)
im.paste(region, box)
im.show()
im.save('pasted.png')
```
## Rolling an image
```
def roll(image, delta):
"Roll an image sideways"
xsize, ysize = image.size #width and height
delta = delta % xsize
if delta == 0: return image
part1 = image.crop((0, 0, delta, ysize))
part2 = image.crop((delta, 0, xsize, ysize))
image.paste(part2, (0, 0, xsize-delta, ysize))
image.paste(part1, (xsize-delta, 0, xsize, ysize))
return image
im = Image.open("lena.png")
print(im.size)
roll(im, 10).show()
```
## Splitting and merging bands
```
im = Image.open("lena.png")
r, g, b = im.split()
im1 = Image.merge("RGB", (b, g, r))
im2 = Image.merge("RGB", (r, r, r))
im3 = Image.merge("RGB", (g, g, g))
im4 = Image.merge("RGB", (b, b, b))
im5 = Image.merge("RGB", (g, r, b))
print(im1.mode)
#im1.show()
#im2.show()
#im3.show()
#im4.show()
im5.show()
```
- Note that for a single-band image, split() returns the image itself. To work with individual color bands, you may want to convert the image to “RGB” first.
## Geometrical transforms
- The PIL.Image.Image class contains methods to resize() and rotate() an image. The former takes a tuple giving the new size, the latter the angle in degrees counter-clockwise.
## Simple geometry transforms
```
im = Image.open("lena.png")
out = im.resize((128, 128))
out.show()
out = im.rotate(45) # degrees counter-clockwise
out.show()
out.save('rotated.png')
```
- To rotate the image in 90 degree steps, you can either use the rotate() method or the transpose() method. The latter can also be used to flip an image around its horizontal or vertical axis.
## Transposing an image
```
out = im.transpose(Image.FLIP_LEFT_RIGHT)
out.save('transposing/l2r.png')
out = im.transpose(Image.FLIP_TOP_BOTTOM)
out.save('transposing/t2b.png')
out = im.transpose(Image.ROTATE_90)
out.save('transposing/90degree.png')
out = im.transpose(Image.ROTATE_180)
out.save('transposing/180degree.png')
out = im.transpose(Image.ROTATE_270)
out.save('transposing/270degree.png')
```
- There’s no difference in performance or result between **transpose(ROTATE)** and corresponding **rotate()** operations.
- A more general form of image transformations can be carried out via the transform() method.
## Color transforms
- The Python Imaging Library allows you to convert images between different pixel representations using the ***convert()*** method.
## Converting between modes
```
im = Image.open("lena.png").convert("L")
im.show()
```
- The library supports transformations between each supported mode and the “L” and “RGB” modes. To convert between other modes, you may have to use an intermediate image (typically an “RGB” image).
## Image enhancement
## Applying filters
- The **ImageFilter** module contains a number of pre-defined enhancement filters that can be used with the **filter()** method.
```
from PIL import ImageFilter
im = Image.open("lena.png")
im.show('im')
im.save('filter/orig.png')
out = im.filter(ImageFilter.DETAIL)
out = out.filter(ImageFilter.DETAIL)
out = out.filter(ImageFilter.DETAIL)
out.show()
out.save('filter/out.png')
```
## Point Operations
## Applying point transforms
```
# multiply each pixel by 1.2
im = Image.open("lena.png")
im.save('point/orig.png')
out = im.point(lambda i: i * 1.2)
out.save('point/out.png')
```
- Using the above technique, you can quickly apply any simple expression to an image. You can also combine the point() and paste() methods to selectively modify an image:
## Processing individual bands
```
im = Image.open("lena.png")
# split the image into individual bands
source = im.split()
R, G, B = 0, 1, 2
# select regions where red is less than 100
mask = source[R].point(lambda i: i < 100 and 255) # 255 where i < 100, else False (0)
# process the green band
out = source[G].point(lambda i: i * 0.7)
# paste the processed band back, but only where red was < 100
source[G].paste(out, None, mask) # mask is just filtering here
# build a new multiband image
im = Image.merge(im.mode, source)
im.show()
# here we are reducing the green where red's intensity value is less than 100
```
- Python only evaluates the portion of a logical expression as is necessary to determine the outcome, and returns the last value examined as the result of the expression. So if the expression above is false (0), Python does not look at the second operand, and thus returns 0. Otherwise, it returns 255.
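The short-circuit rule above can be seen in isolation:

```
# `and` returns the last operand it had to evaluate
print(50 < 100 and 255)    # left side true, so the right operand is returned
print(150 < 100 and 255)   # left side false, right operand never evaluated
```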
## Enhancement
```
from PIL import Image
from PIL import ImageEnhance
im = Image.open("lena.png")
enh = ImageEnhance.Contrast(im)
enh.enhance(1.3).show("30% more contrast")
```
## Image sequences
## Reading sequences
```
from PIL import Image
im = Image.open("animation.gif")
im.seek(1) # skip to the second frame
try:
while 1:
im.seek(im.tell()+1)
im.show()
# do something to im
except EOFError as e:
print(e)
pass # end of sequence
```
## A sequence iterator class
```
from PIL import Image
im = Image.open("animation.gif")
class ImageSequence:
def __init__(self, im):
self.im = im
def __getitem__(self, ix):
try:
if ix:
self.im.seek(ix)
return self.im
        except EOFError:
            raise IndexError # end of sequence
for frame in ImageSequence(im):
# ...do something to frame...
frame.show()
pass
```
## Postscript printing
## Drawing Postscript
```
from PIL import Image
from PIL import PSDraw
im = Image.open("lena.png")
title = "lena"
box = (1*72, 2*72, 7*72, 10*72) # in points
ps = PSDraw.PSDraw() # default is sys.stdout
ps.begin_document(title)
# draw the image (75 dpi)
ps.image(box, im, 75)
ps.rectangle(box)
# draw title
ps.setfont("HelveticaNarrow-Bold", 36)
ps.text((3*72, 4*72), title)
ps.end_document()
```
## More on reading images
## Reading from an open file
```
fp = open("lena.png", "rb")
im = Image.open(fp)
im.show()
```
## Reading from a string
```
from io import BytesIO
# `buffer` must already hold the raw image bytes (e.g. from a download)
im = Image.open(BytesIO(buffer))
```
## Reading from a tar archive
```
from PIL import TarIO
fp = TarIO.TarIO("Imaging.tar", "lena.png")
im = Image.open(fp)
```
## Controlling the decoder
## Reading in draft mode
```
from __future__ import print_function
from PIL import Image
im = Image.open('lena.png')
print("original =", im.mode, im.size)
im.draft("L", (100, 100))
print("draft =", im.mode, im.size)
```
```
import pandas as pd
import numpy as np
import scanpy as sc
import os
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics.cluster import adjusted_mutual_info_score
from sklearn.metrics.cluster import homogeneity_score
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri
df_metrics = pd.DataFrame(columns=['ARI_Louvain','ARI_kmeans','ARI_HC',
'AMI_Louvain','AMI_kmeans','AMI_HC',
'Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC'])
workdir = './peaks_frequency_results/'
path_fm = os.path.join(workdir,'feature_matrices/')
path_clusters = os.path.join(workdir,'clusters/')
path_metrics = os.path.join(workdir,'metrics/')
os.system('mkdir -p '+path_clusters)
os.system('mkdir -p '+path_metrics)
metadata = pd.read_csv('../../input/metadata.tsv',sep='\t',index_col=0)
num_clusters = len(np.unique(metadata['label']))
print(num_clusters)
files = [x for x in os.listdir(path_fm) if x.startswith('FM')]
len(files)
files
def getNClusters(adata,n_cluster,range_min=0,range_max=3,max_steps=20):
this_step = 0
this_min = float(range_min)
this_max = float(range_max)
while this_step < max_steps:
print('step ' + str(this_step))
this_resolution = this_min + ((this_max-this_min)/2)
sc.tl.louvain(adata,resolution=this_resolution)
this_clusters = adata.obs['louvain'].nunique()
print('got ' + str(this_clusters) + ' at resolution ' + str(this_resolution))
if this_clusters > n_cluster:
this_max = this_resolution
elif this_clusters < n_cluster:
this_min = this_resolution
else:
return(this_resolution, adata)
this_step += 1
print('Cannot find the number of clusters')
print('Clustering solution from last iteration is used:' + str(this_clusters) + ' at resolution ' + str(this_resolution))
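# getNClusters above is a plain bisection on the Louvain resolution: higher
# resolution gives more clusters, so halve the interval until the target
# count is hit. A scanpy-free sketch of the same idea (the monotone
# `count_at` callable is hypothetical, standing in for the Louvain call):
def bisect_resolution(count_at, n_cluster, range_min=0.0, range_max=3.0, max_steps=20):
    lo, hi = float(range_min), float(range_max)
    for _ in range(max_steps):
        res = lo + (hi - lo) / 2
        k = count_at(res)
        if k > n_cluster:
            hi = res      # too many clusters: lower the resolution
        elif k < n_cluster:
            lo = res      # too few clusters: raise the resolution
        else:
            return res
    return None           # target count not reached within max_steps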
for file in files:
file_split = file[:-4].split('_')
method = file_split[1]
print(method)
pandas2ri.activate()
readRDS = robjects.r['readRDS']
df_rds = readRDS(os.path.join(path_fm,file))
fm_mat = pandas2ri.ri2py(robjects.r['data.frame'](robjects.r['as.matrix'](df_rds)))
fm_mat.fillna(0,inplace=True)
fm_mat.columns = metadata.index
adata = sc.AnnData(fm_mat.T)
adata.var_names_make_unique()
adata.obs = metadata.loc[adata.obs.index,]
df_metrics.loc[method,] = ""
#Louvain
sc.pp.neighbors(adata, n_neighbors=15,use_rep='X')
# sc.tl.louvain(adata)
getNClusters(adata,n_cluster=num_clusters)
#kmeans
kmeans = KMeans(n_clusters=num_clusters, random_state=2019).fit(adata.X)
adata.obs['kmeans'] = pd.Series(kmeans.labels_,index=adata.obs.index).astype('category')
#hierachical clustering
hc = AgglomerativeClustering(n_clusters=num_clusters).fit(adata.X)
adata.obs['hc'] = pd.Series(hc.labels_,index=adata.obs.index).astype('category')
#clustering metrics
#adjusted rank index
ari_louvain = adjusted_rand_score(adata.obs['label'], adata.obs['louvain'])
ari_kmeans = adjusted_rand_score(adata.obs['label'], adata.obs['kmeans'])
ari_hc = adjusted_rand_score(adata.obs['label'], adata.obs['hc'])
#adjusted mutual information
ami_louvain = adjusted_mutual_info_score(adata.obs['label'], adata.obs['louvain'],average_method='arithmetic')
ami_kmeans = adjusted_mutual_info_score(adata.obs['label'], adata.obs['kmeans'],average_method='arithmetic')
ami_hc = adjusted_mutual_info_score(adata.obs['label'], adata.obs['hc'],average_method='arithmetic')
#homogeneity
homo_louvain = homogeneity_score(adata.obs['label'], adata.obs['louvain'])
homo_kmeans = homogeneity_score(adata.obs['label'], adata.obs['kmeans'])
homo_hc = homogeneity_score(adata.obs['label'], adata.obs['hc'])
df_metrics.loc[method,['ARI_Louvain','ARI_kmeans','ARI_HC']] = [ari_louvain,ari_kmeans,ari_hc]
df_metrics.loc[method,['AMI_Louvain','AMI_kmeans','AMI_HC']] = [ami_louvain,ami_kmeans,ami_hc]
df_metrics.loc[method,['Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC']] = [homo_louvain,homo_kmeans,homo_hc]
adata.obs[['louvain','kmeans','hc']].to_csv(os.path.join(path_clusters ,method + '_clusters.tsv'),sep='\t')
df_metrics.to_csv(path_metrics+'clustering_scores.csv')
df_metrics
```
<img style="float: right; margin: 0px 0px 15px 15px;" src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSQt6eQo8JPYzYO4p6WmxLtccdtJ4X8WR6GzVVKbsMjyGvUDEn1mg" width="300px" height="100px" />
# Working with options
An option can be traded on the secondary market, so it is important to determine its value $V_t$ at each time $t\in [0, T]$. The profit earned by whoever acquires the option is called the payoff function, and it clearly depends on the value of the underlying asset.
There is a wide variety of options in the market, classified by their payoff function and by how they can be exercised. Options whose payoff is
$$ P(S(t),t)=\max\{S(T)-K,0\} \rightarrow \text{for a call}$$
$$ P(S(t),t)=\max\{K-S(T),0\} \rightarrow \text{for a put}$$
are called **vanilla** options, with $h:[0,\infty) \to [0,\infty)$.
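These payoffs can be written down directly (a vectorized sketch with NumPy; the values below are illustrative):

```
import numpy as np

def call_payoff(S_T, K):
    """max(S_T - K, 0): payoff of a call at expiry."""
    return np.maximum(S_T - K, 0.0)

def put_payoff(S_T, K):
    """max(K - S_T, 0): payoff of a put at expiry."""
    return np.maximum(K - S_T, 0.0)

print(call_payoff(np.array([90.0, 110.0]), 100.0))  # [ 0. 10.]
print(put_payoff(np.array([90.0, 110.0]), 100.0))   # [10.  0.]
```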
An option is called **European** if it can be exercised only at the expiration date.
An option is **American** if it can be exercised at any time up to and including the expiration date.
A popular family of exotic options are the so-called **Asian options**, whose payoffs depend on the whole path of the underlying asset price. Options whose payoffs depend on the price path of the underlying are called path-dependent options.
In short, the two main reasons for using options are **hedging** and **speculation**.
## Plain vanilla options: European calls and puts
A vanilla option is an ordinary call or put with no special or unusual features. It may come in standardized sizes and maturities and be traded on an exchange.
Compared with other option structures, vanilla options are neither sophisticated nor complicated.
## 1. How to download option data?
```
# Import the packages we will use
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import datetime
import matplotlib.pyplot as plt
import scipy.stats as st
import seaborn as sns
%matplotlib inline
# Some pandas display options
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
```
With the `pandas_datareader` package we can also download option data. For example, let's download data for options whose underlying asset is Apple stock
```
aapl = web.YahooOptions('AAPL')
aapl_opt = aapl.get_all_data().reset_index()
aapl_opt.set_index('Expiry')
# aapl
```
Price of the underlying asset
```
aapl_opt.Underlying_Price[0]
```
Option data
```
aapl_opt.loc[0, 'JSON']
```
### Key concepts
- The bid price is the highest price a buyer will pay for an asset.
- The ask price is the lowest price a seller will accept for an asset.
- The difference between the two is known as the spread; the smaller the spread, the more liquid the security.
- Liquidity: how easily a given option can be converted into cash.
- Implied volatility is the market's forecast of a likely move in a security's price.
- Implied volatility rises in bearish markets and falls when the market is bullish.
- The last price ('lastprice') is the price at which the most recent trade of a given option occurred.
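The spread and mid-price follow directly from a quote (illustrative numbers, not a live quote):

```
# Spread and mid-price from a bid/ask quote
bid, ask = 1.20, 1.35
spread = ask - bid          # tighter spread -> more liquid
mid = (bid + ask) / 2       # a common reference price
print(round(spread, 2), round(mid, 3))
```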
Once we have the data, we can query the type of each option
```
aapl_opt.loc[:, 'Type']
```
or their expiration dates
```
pd.set_option('display.max_rows', 10)
aapl_opt.loc[:, 'Expiry']
```
We might also want to query all the call options that expire on a given date (here `fecha1 = '2021-06-18'`)
```
fecha1 = '2021-06-18'
fecha2 = '2022-09-16'
call06_f1 = aapl_opt.loc[(aapl_opt.Expiry== fecha1) & (aapl_opt.Type=='call')]
call06_f2 = aapl_opt.loc[(aapl_opt.Expiry== fecha2) & (aapl_opt.Type=='call')]
call06_f1
```
## 2. What is implied volatility?
**Volatility:** standard deviation of the returns.
- How is it computed?
- Why compute volatility?
    - **To value derivatives**, for example **options**.
    - Risk-neutral valuation method (it assumes the asset price $S_t$ is unaffected by market risk).
Quick review from quantitative finance:
1. The Black-Scholes equation
$$ dS(t) = \mu S(t)dt + \sigma S(t)dW_t$$
2. Solution of the equation
The value $V_t$ of a plain vanilla European option can be obtained as
$$V_t = F(t,S_t)$$ where
$$F(t,S_t) = e^{-r(T-t)}\,E^*\left[h(S_T)\mid S_t\right]$$
3. European call option, assuming lognormal asset prices
4. European put option, assuming lognormal asset prices
So, what is **implied volatility**?
Volatility is a measure of the uncertainty about the future behavior of an asset, usually measured as the standard deviation of the asset's returns.
An implied volatility is the volatility that, when substituted into the Black-Scholes equation (or its extensions), reproduces the option's market price.
## Volatility smile
- When options with the same expiration date and the same underlying asset, but different strike prices, are plotted against implied volatility, the plot tends to show a smile.
- The smile shows that the options furthest in- or out-of-the-money carry the highest implied volatility.
- Not all options exhibit an implied volatility smile; short-term equity options and currency options are the most likely to show one.

*(figure: volatility smile — image not preserved)*

> Source: https://www.investopedia.com/terms/v/volatilitysmile.asp
> ### Check this for `fecha1` and `fecha2`
```
# Calls expiring on fecha1
ax = call06_f1.set_index('Strike').loc[:, 'IV'].plot(figsize=(8,6))
ax.axvline(call06_f1.Underlying_Price.iloc[0], color='g');
# Calls expiring on fecha2
ax = call06_f2.set_index('Strike').loc[:, 'IV'].plot(figsize=(8,6))
ax.axvline(call06_f2.Underlying_Price.iloc[0], color='g');
```
Now let's look at the `put` data
```
put06_f1 = aapl_opt.loc[(aapl_opt.Expiry==fecha1) & (aapl_opt.Type=='put')]
put06_f1
```
For the `put` options at `fecha1`
```
ax = put06_f1.set_index('Strike').loc[:, 'IV'].plot(figsize=(8,6))
ax.axvline(put06_f1.Underlying_Price.iloc[0], color='g')
```
With what we have learned so far, we should be able to write a function that returns a pandas `DataFrame` with the adjusted close prices of given companies over given dates:
- Write the function below
```
# Function to download adjusted close prices:
def get_adj_closes(tickers, start_date=None, end_date=None):
    # Default start date (start_date='2010-01-01') and default end date (end_date=today)
    # Download a DataFrame with all the data
    closes = web.DataReader(name=tickers, data_source='yahoo', start=start_date, end=end_date)
    # We only need the adjusted close prices
    closes = closes['Adj Close']
    # Sort the index in ascending order
    closes.sort_index(inplace=True)
    return closes
```
- As an example, get Apple's close prices from the past years to date. Plot them...
```
ticker = ['AAPL']
start_date = '2017-01-01'
closes_aapl = get_adj_closes(ticker, start_date)
closes_aapl.plot(figsize=(8,5));
plt.legend(ticker);
```
- Write a function that, given the price history, returns the log returns:
```
def calc_daily_ret(closes):
return np.log(closes/closes.shift(1)).iloc[1:]
```
- Plot...
```
ret_aapl = calc_daily_ret(closes_aapl)
ret_aapl.plot(figsize=(8,6));
```
Also, download Apple option data:
```
aapl = web.YahooOptions('AAPL')
aapl_opt = aapl.get_all_data().reset_index()
aapl_opt.set_index('Expiry').sort_index()
K = 110 # strike price
indice_opt = aapl_opt.loc[(aapl_opt.Type=='call') & (aapl_opt.Strike==K) & (aapl_opt.Expiry=='2021-01-15')]
indice_opt
i_opt= indice_opt.index
opcion_valuar = aapl_opt.loc[i_opt[0]]
opcion_valuar['JSON']
print('Current underlying asset price = ', opcion_valuar.Underlying_Price)
```
# Price simulation using simple and log returns
$$R_t = \frac{S_t}{S_{t-1}} - 1, \qquad r_t = \ln\frac{S_t}{S_{t-1}}$$
$$S_T = S_0\prod_{t=1}^{T}(1+R_t) = S_0\,e^{\sum_{t=1}^{T} r_t}$$
* We start by assuming the returns are a stationary process distributed $\mathcal{N}(\mu,\sigma)$.
## Simple returns
```
# Compute the simple returns
Ri = closes_aapl.pct_change(1).iloc[1:]
# Mean and standard deviation of the returns
mu_R = Ri.mean()[0]
sigma_R = Ri.std()[0]
Ri
opcion_valuar.Expiry
from datetime import date
# Today's date as a pandas Timestamp
today = pd.Timestamp(date.today())
# Expiry date of the option being valued
expiry = opcion_valuar.Expiry
nscen = 10000
# Business-day date range from today to expiry
dates = pd.date_range(start=today, end=expiry, freq='B')
ndays = len(dates)
```
## Simulating prices from the returns
### 1. Using simple returns
```
# Simulate the returns
# Daily time step
dt = 1
# Z ~ N(0,1) standard normal, shape (ndays, nscen)
Z = np.random.randn(ndays, nscen)
# Normally distributed simulated returns
Ri_dt = pd.DataFrame(mu_R * dt + Z * sigma_R * np.sqrt(dt), index=dates)
# Price simulation
S_0 = closes_aapl.iloc[-1,0]
S_T = S_0*(1+Ri_dt).cumprod()
# Plot the simulated prices together with the downloaded prices
pd.concat([closes_aapl, S_T.iloc[:, :10]]).plot(figsize=(8,6));
plt.title('Price simulation using simple returns');
```
### 2. Log returns
```
# Compute the log returns
ri = calc_daily_ret(closes_aapl)
# Mean and standard deviation of the log returns
mu_r = ri.mean()[0]
sigma_r = ri.std()[0]
# Simulate the returns
Z = np.random.randn(ndays, nscen)
sim_ret_ri = pd.DataFrame(mu_r * dt + Z * sigma_r * np.sqrt(dt), index=dates )
# Price simulation
S_0 = closes_aapl.iloc[-1,0]
S_T2 = S_0*np.exp(sim_ret_ri.cumsum())
# Plot the simulated prices together with the downloaded prices
# pd.concat([closes_aapl,S_T2]).plot(figsize=(8,6));
# plt.title('Price simulation using log returns');
# from sklearn.metrics import mean_absolute_error
e1 = np.abs(S_T-S_T2).mean().mean()
e1
print('The standard deviations from log and simple returns are similar')
sigma_R,sigma_r
```
From the simulated prices we must compute the option value according to the corresponding payoff function, which in this case is:
$$
\max(S_T - K, 0)
$$
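A self-contained Monte Carlo sketch of this valuation under risk-neutral geometric Brownian motion (the parameters are illustrative, not the AAPL data above):

```
import numpy as np

def mc_call_price(S0, K, r, sigma, T, n_paths=200_000, seed=0):
    """Discounted mean of max(S_T - K, 0) under risk-neutral GBM."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    # Terminal price from the exact GBM solution
    S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

print(mc_call_price(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```

With enough paths this converges to the Black-Scholes closed-form value for the same parameters.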
```
opcion_valuar['JSON']
```
## Valuación usando el modelo de Black and Scholes
Los supuestos que hicieron Black y Scholes cuando dedujeron su fórmula para la valoración de opciones fueron los siguientes:
1. El comportamiento del precio de la acción corresponde al modelo logarítmico normal, con $\mu$ y $\sigma$
constantes.
2. No hay costos de transición ni impuestos. Todos los títulos son perfectamente divisibles.
3. No hay dividendos sobre la acción durante la vida de la opción.
4. No hay oportunidades de arbitraje libres de riesgo.
5. La negociación de valores es continua.
6. Los inversionistas pueden adquirir u otorgar préstamos a la misma tasa de interés libre de riesgo.
7. La tasa de interés libre de riesgo a corto plazo, r, es constante.
Bajo los supuestos anteriores podemos presentar las **fórmulas de Black-Scholes** para calcular los precios de compra y de venta europeas sobre acciones que no pagan dividendos:
$$
\text{Valor actual de la opción} = V(S_0, T) = S_0 N(d_1) - K e^{-r*T} N(d_2)
$$
donde:
- $S_0$ = precio de la acción en el momento actual.
- $K$ = precio "de ejercicio" de la opción.
- $r$ = tasa de interés libre de riesgo.
- $T$ = tiempo que le resta de vida a la opción.
- $N(d)$ = función de distribución de la variable aleatoria normal con media nula y desviación típica unitaria
(probabilidad de que dicha variable sea menor o igual que d). Función de distribución de probabilidad acumulada.
- $\sigma$ = varianza por período de la tasa o tipo de rendimiento de la opción.
$$
d_1 = \frac{\ln{\frac{S_0}{K}} + (r + \sigma^2 / 2) T}{\sigma \sqrt{T}}, \quad d_2 = \frac{\ln{\frac{S_0}{K}} + (r - \sigma^2 / 2) T}{\sigma \sqrt{T}}
$$
**Nota**: observe que el __rendimiento esperado__ sobre la acción no se incluye en la ecuación de Black-Scholes. Hay un principio general conocido como valoración neutral al riesgo, el cual establece que cualquier título que depende de otros títulos negociados puede valorarse bajo el supuesto de que el mundo es neutral al riesgo. El resultado demuestra ser muy útil en la práctica. *En un mundo neutral al riesgo, el rendimiento esperado de todos los títulos es la tasa de interés libre de riesgo*, y la tasa de descuento correcta para los flujos de efectivo esperados también es la tasa de interés libre de riesgo.
The equivalent of the Black-Scholes function (option valuation) can be shown to be:
$$
\text{Current option value} = V(S_0, T) = E^*(e^{-rT} f(S_T)) = e^{-rT} E^*(f(S_T))
$$
where
$f(S_T)$ is the payoff function of the option, which for a European call is $f(S_T) = \max(S_T - K, 0)$.
> Reference: http://diposit.ub.edu/dspace/bitstream/2445/32883/1/Benito_el_modelo_de_Black_Sholes.pdf (page 20)
> Reference 2: http://www.cmat.edu.uy/~mordecki/courses/upae/upae-curso.pdf (page 24)
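The risk-neutral expectation above is exactly what the Monte Carlo simulation below approximates. A minimal standalone sketch under geometric Brownian motion (illustrative parameters, not the notebook's AAPL data):

```python
import numpy as np

def mc_european_call(S0, K, r, sigma, T, nscen=200_000, seed=42):
    # Simulate S_T under the risk-neutral GBM dynamics and
    # discount the average payoff max(S_T - K, 0)
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(nscen)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0).mean()

mc_price = mc_european_call(100, 100, 0.05, 0.2, 1.0)
```

With these illustrative parameters the estimate lands near the closed-form Black-Scholes value of about 10.45.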
- Compute the sample mean and standard deviation of the log returns
```
mu = ret_aapl.mean()[0]
sigma = ret_aapl.std()[0]
mu, sigma
```
We use the risk-free rate instead of the sample mean of the returns
> Referencia: https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yield
```
# 1-yr Treasury bond rate as of 11/24/20 -> 0.11%
r = 0.0011/360 # Daily rate
```
- We simulate the contract period from `TODAY` until the `Expiry` date, with 10 scenarios:
- Generate the dates
```
from datetime import date
today = pd.Timestamp(date.today())
expiry = opcion_valuar.Expiry
nscen = 10
# Business days from today until expiry
dates = pd.date_range(start=today, end=expiry, freq='B')
ndays = len(dates)
```
- Generate 10 scenarios of simulated returns and store them in a DataFrame
```
sim_ret = pd.DataFrame(sigma*np.random.randn(ndays,nscen)+r, index=dates)
sim_ret.cumsum()
# Columns are the scenarios and rows are the contract days
```
- With the simulated returns, compute the corresponding price scenarios:
```
S0 = closes_aapl.iloc[-1,0] # Initial condition for the simulated price
sim_closes = S0*np.exp(sim_ret.cumsum())
sim_closes
```
- Plot:
```
sim_closes.plot(figsize=(8,6));
# Show the simulated prices together with the downloaded prices
pd.concat([closes_aapl,sim_closes]).plot(figsize=(8,6));
K = opcion_valuar.Strike
call = np.exp(-r*ndays) * np.fmax(sim_closes - K, 0).mean(axis=1)
call
opcion_valuar['JSON']
# sigma = 0.3064034204101562/np.sqrt(252)
# sigma
from datetime import date
Hoy = date.today()
# strike price
K = opcion_valuar['JSON']['strike']
# Dates to simulate: business days until expiry
dates = pd.date_range(start=Hoy, end=opcion_valuar.Expiry, freq='B')
# Number of days and scenarios
ndays = len(dates)
nscen = 100000
# Initial condition for the simulated price
S0 = closes_aapl.iloc[-1,0]
# Simulate returns
sim_ret = pd.DataFrame(sigma*np.random.randn(ndays,nscen)+r,index=dates)
# Simulate prices
sim_closes = S0*np.exp(sim_ret.cumsum())
# Frame with the strike value
strike = pd.DataFrame(K*np.ones([ndays,nscen]), index=dates)
# European call value
call = pd.DataFrame({'Prima':np.exp(-r*ndays) \
                     *np.fmax(sim_closes-K, 0).mean(axis=1)}, index=dates)
call.plot();
```
The option valuation is:
```
call.iloc[-1]
```
99% confidence interval
```
import scipy.stats as st  # in case it was not already imported earlier in the notebook
confianza = 0.99
sigma_est = sim_closes.iloc[-1].sem()
mean_est = call.iloc[-1].Prima
i2 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)
print(i2)
opcion_valuar['JSON']
```
## Simulated prices using variance-reduction techniques
```
# Stratified sampling ----> number of strata = nscen
U = (np.arange(0,nscen)+np.random.rand(ndays,nscen))/nscen
Z = st.norm.ppf(U)
sim_ret2 = pd.DataFrame(sigma*Z+r,index=dates)
sim_closes2 = S0*np.exp(sim_ret2.cumsum())
# Payoff function
strike = pd.DataFrame(K*np.ones([ndays,nscen]), index=dates)
call = pd.DataFrame({'Prima':np.exp(-r*ndays) \
                     *np.fmax(sim_closes2-strike,np.zeros([ndays,nscen])).T.mean()}, index=dates)
call.plot();
```
The option valuation is:
```
call.iloc[-1]
```
99% confidence interval
```
confianza = 0.99
sigma_est = sim_closes2.iloc[-1].sem()
mean_est = call.iloc[-1].Prima
i2 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)
print(i2)
```
### Analysis of the return distribution
### Fitting a normal distribution
```
ren = calc_daily_ret(closes_aapl) # returns
y,x,des = plt.hist(ren['AAPL'],bins=50,density=True,label='Return histogram')
mu_fit,sd_fit = st.norm.fit(ren) # Fit the parameters of a normal distribution
# Maximum and minimum of the returns to generate
ren_max = max(x);ren_min = min(x)
# Vector of generated returns
ren_gen = np.arange(ren_min,ren_max,0.001)
# Normal pdf evaluated with the fitted parameters
curve_fit = st.norm.pdf(ren_gen,loc=mu_fit,scale=sd_fit)
plt.plot(ren_gen,curve_fit,label='Fitted distribution')
plt.legend()
plt.show()
```
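Before moving to a heavier-tailed model, a formal normality test can complement the visual comparison above. A minimal sketch with synthetic heavy-tailed data (not the AAPL returns):

```python
import numpy as np
from scipy import stats

# Synthetic heavy-tailed "returns" drawn from a t distribution with 3
# degrees of freedom, scaled to daily-return magnitudes
heavy_tailed = stats.t.rvs(df=3, size=2000, random_state=42) * 0.01
# D'Agostino-Pearson test: does the sample look normal?
stat, pvalue = stats.normaltest(heavy_tailed)
```

A small p-value rejects normality; daily equity returns typically fail this test, which motivates fitting a t distribution instead.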
### Fitting a t distribution
```
# Returns
ren = calc_daily_ret(closes_aapl)
# Histogram of the returns
y, x, _ = plt.hist(ren['AAPL'], bins=50, density=True, label='Return histogram')
# Fit the parameters of the distribution
dist = 't'
params = getattr(st, dist).fit(ren.values)
# Pdf of the fitted distribution evaluated with the estimated parameters
curve_fit = getattr(st, dist).pdf(x, *params)
plt.plot(x, curve_fit, label='Fitted distribution')
plt.legend()
plt.show()
# Q-Q
st.probplot(ren['AAPL'], sparams=params[:-2], dist=dist, plot=plt);
```
## 3. Valuation using simulation: using the return histogram
All of the previous analysis still holds; only the way the random numbers for the Monte Carlo simulation are generated changes.
Now we build a histogram of the daily returns and use it to draw random values for the simulated returns.
- First, the number of days and the number of simulation scenarios
```
nscen = 10
```
- From the previous histogram we already know the occurrence probabilities, stored above in the variable `y`
```
prob = y/np.sum(y)
values = x[1:]
prob.sum()
```
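The bin probabilities and values above drive the sampling step. A minimal self-contained sketch of drawing returns from an empirical histogram (synthetic stand-in data, not the notebook's):

```python
import numpy as np

# Stand-in for observed daily returns
rng = np.random.default_rng(0)
observed = rng.normal(0.0005, 0.02, 1000)
# Build the empirical histogram and normalize counts to probabilities
freq, edges = np.histogram(observed, bins=50)
prob = freq / freq.sum()
# One representative value per bin (here, the bin centre)
centers = (edges[:-1] + edges[1:]) / 2
# Resample returns according to the empirical bin probabilities
sim = rng.choice(centers, size=10_000, p=prob)
```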
- With these, we generate the random numbers corresponding to the returns (as many as days times number of scenarios).
```
dates = pd.date_range(start=Hoy,end=opcion_valuar.Expiry, freq='B')
ndays = len(dates)
ret = np.random.choice(values, ndays*nscen, p=prob)
sim_ret_hist = pd.DataFrame(ret.reshape((ndays,nscen)),index=dates)
sim_ret_hist
sim_closes_hist = (closes_aapl.iloc[-1,0])*np.exp(sim_ret_hist.cumsum())
sim_closes_hist
sim_closes_hist.plot(figsize=(8,6),legend=False);
pd.concat([closes_aapl,sim_closes_hist]).plot(figsize=(8,6),legend=False);
plt.title('Simulation using the return histogram')
(ret_aapl - ret_aapl.mean() + r).mean()
K = opcion_valuar['JSON']['strike']
ndays = len(dates)
nscen = 10000
# Histogram of the returns re-centred at the risk-free rate
freq, values = np.histogram(ret_aapl+r-mu, bins=2000)
prob = freq/np.sum(freq)
# Simulate the returns
ret = np.random.choice(values[1:], ndays*nscen, p=prob)
# Simulate the prices
sim_ret_hist = pd.DataFrame(ret.reshape((ndays,nscen)),index=dates)
sim_closes_hist = (closes_aapl.iloc[-1,0]) * np.exp(sim_ret_hist.cumsum())
sim_closes_hist
call_hist = pd.DataFrame({'Prima':np.exp(-r*ndays) \
*np.fmax(sim_closes_hist-K, 0).mean(axis=1)}, index=dates)
call_hist.plot();
call_hist.iloc[-1]
opcion_valuar['JSON']
```
95% confidence interval
```
confianza = 0.95
sigma_est = sim_closes_hist.iloc[-1].sem()
mean_est = call_hist.iloc[-1].Prima
i2 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)
print(i2)
```
# <font color = 'red'> Homework: </font>
Replicate the procedure above for valuing 'call' options, but this time for options of type 'put'.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Esteban Jiménez Rodríguez and modified by Oscar Jaramillo Z.
</footer>
|
github_jupyter
|
# Mesh visualisation
We are now going to have a look at different mesh visualisation options. We are going to use the following mesh:
```
import discretisedfield as df
p1 = (0, 0, 0)
p2 = (100e-9, 50e-9, 20e-9)
n = (20, 10, 4)
region = df.Region(p1=p1, p2=p2)
mesh = df.Mesh(region=region, n=n)
```
Same as the region object, there are two main ways how we can visualise mesh in `discretisedfield`:
1. Using `matplotlib` (static 2D plots, usually with some tricks to make them look 3D)
2. Using `k3d` (interactive 3D plots)
All `matplotlib` method names start with `mpl`, whereas all `k3d` plots start with `k3d`. We will first have a look at simple plotting using both `matplotlib` and `k3d` and later look at how we can pass different parameters to change them.
## Basic plotting
To get a quick `matplotlib` "3D" plot of the mesh, we call `mpl`:
```
mesh.mpl()
```
`mpl` plots two cuboids. The larger one corresponds to the region and the smaller one to the discretisation cell. Without passing any parameters to the `mpl` function, some default settings are chosen. We can see that `matplotlib` is not good at showing the right proportions of the region. More precisely, we know that the region is much thinner in the z-direction, but that is not the impression we get from the plot. This is the main disadvantage of `mpl`.
Now, we can ask our region object for an interactive `k3d` plot:
```
# NBVAL_IGNORE_OUTPUT
mesh.k3d()
```
Similar to the `mpl` plot, we can see the region as well as the discretisation cell in this plot. This can be useful to get an impression of the discretisation cell size with respect to the region we discretise. A `k3d` plot is interactive, so we can zoom, rotate, etc. In addition, a small control panel is shown in the top-right corner, where we can modify some of the plot's properties.
## Advanced plotting
Here we explore what parameters we can pass to `mpl` and `k3d` functions. Let us start with `mpl`.
### `mpl`
The default plot is:
```
mesh.mpl()
```
If we want to change the figure size, we can pass the `figsize` parameter. Its value must be a length-2 tuple, with the first element being the size in the horizontal direction and the second element the size in the vertical direction.
```
mesh.mpl(figsize=(10, 5))
```
We can choose the colour of the lines depicting the region and the discretisation cell by passing the `color` argument. `color` must be a length-2 tuple of valid `matplotlib` colours. For instance, it can be a pair of RGB hex-strings ([online converter](http://www.javascripter.net/faq/rgbtohex.htm)). The first element of `color` is the colour of the region, whereas the second element is the colour of the discretisation cell.
```
mesh.mpl(color=('#9400D3', '#0000FF'))
```
`discretisedfield` automatically chooses the SI prefix (nano, micro, etc.) it is going to use to divide the axes with and show those units on the axes. Sometimes (e.g. for thin films), `discretisedfield` does not choose the SI prefix we expected. In those cases, we can explicitly pass it using `multiplier` argument. ``multiplier`` can be passed as $10^{n}$, where $n$ is a multiple of 3 (..., -6, -3, 0, 3, 6,...). For instance, if `multiplier=1e-6` is passed, all axes will be divided by $1\,\mu\text{m}$ and $\mu\text{m}$ units will be used as axis labels.
```
mesh.mpl(multiplier=1e-6)
```
If we want to save the plot, we pass `filename`; the mesh plot is shown and also saved in our working directory as a PDF.
```
mesh.mpl(filename='my-mesh-plot.pdf')
```
`mpl` mesh plot is based on [`matplotlib.pyplot.plot` function](https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.plot.html). Therefore, any parameter accepted by it can be passed. For instance:
```
mesh.mpl(marker='o', linestyle='dashed')
```
Finally, we show how to expose the axes on which the mesh is plotted, so that we can customise them. We do that by creating the axes ourselves and then passing them to `mpl` function.
```
import matplotlib.pyplot as plt
# Create the axes
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111, projection='3d')
# Add the region to the axes
mesh.mpl(ax=ax)
# Customise the axes
ax.set_xlabel('length')
ax.set_ylabel('width')
ax.set_zlabel('height')
```
This way, by exposing the axes and passing any allowed `matplotlib.pyplot.plot` argument, we can customise the plot any way we like (as long as it is allowed by `matplotlib`).
### `k3d`
Default k3d plot is:
```
# NBVAL_IGNORE_OUTPUT
mesh.k3d()
```
If we want to change the color, we can pass `color` argument. It is a length-2 tuple of integer colours. The first number in the tuple is the colour of the region, whereas the second colour is the colour of the discretisation cell.
```
# NBVAL_IGNORE_OUTPUT
mesh.k3d(color=(754321, 123456))
```
Similar to the `mpl` plot, we can change the axes multiplier.
```
# NBVAL_IGNORE_OUTPUT
mesh.k3d(multiplier=1e-6)
```
`k3d` plot is based on [k3d.voxels](https://k3d-jupyter.org/k3d.html#k3d.factory.voxels), so any parameter accepted by it can be passed. For instance:
```
# NBVAL_IGNORE_OUTPUT
mesh.k3d(opacity=0.2)
```
We can also expose `k3d.Plot` object and customise it.
```
# NBVAL_IGNORE_OUTPUT
import k3d
# Expose plot object
plot = k3d.plot()
plot.display()
# Add region to the plot
mesh.k3d(plot=plot)
# Customise the plot
plot.axes = [r'\text{length}', r'\text{width}', r'\text{height}']
```
This way, we can modify the plot however we want (as long as `k3d` allows it).
|
github_jupyter
|
# Hello World Text Detection
A very basic introduction to OpenVINO that shows how to do text detection on a given IR model.
We use the [horizontal-text-detection-0001](https://docs.openvinotoolkit.org/latest/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/). It detects text in images and returns a blob of data of shape [100, 5]. Each detection description has the format [x_min, y_min, x_max, y_max, conf].
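As a sketch of what that output looks like, the [100, 5] blob can be filtered with plain NumPy (synthetic stand-in rows below, not real model output):

```python
import numpy as np

# Each row is [x_min, y_min, x_max, y_max, conf]; unused slots are all zeros
detections = np.zeros((100, 5))
detections[0] = [10, 20, 110, 60, 0.92]
detections[1] = [30, 40, 80, 90, 0.15]
# Drop the empty (all-zero) rows, then keep only confident detections
nonzero = detections[~np.all(detections == 0, axis=1)]
confident = nonzero[nonzero[:, -1] > 0.3]
```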
## Imports
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
from openvino.inference_engine import IECore
```
## Load the Model
```
ie = IECore()
net = ie.read_network(
model="model/horizontal-text-detection-0001.xml",
weights="model/horizontal-text-detection-0001.bin",
)
exec_net = ie.load_network(net, "CPU")
output_layer_ir = next(iter(exec_net.outputs))
input_layer_ir = next(iter(exec_net.input_info))
```
## Load an Image
```
# The text detection model expects an image in BGR format
image = cv2.imread("data/intel_rnb.jpg")
# N,C,H,W = batch size, number of channels, height, width
N, C, H, W = net.input_info[input_layer_ir].tensor_desc.dims
# Resize image to meet network expected input sizes
resized_image = cv2.resize(image, (W, H))
# Reshape to network input shape
input_image = np.expand_dims(resized_image.transpose(2, 0, 1), 0)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB));
```
## Do Inference
```
result = exec_net.infer(inputs={input_layer_ir: input_image})
# Extract list of boxes from results
boxes = result["boxes"]
# Remove zero only boxes
boxes = boxes[~np.all(boxes == 0, axis=1)]
```
## Visualize Results
```
# For each detection, the description has the format: [x_min, y_min, x_max, y_max, conf]
# The image passed here is in BGR format with changed width and height. To display it in the colors expected by matplotlib, we use the cvtColor function
def convert_result_to_image(bgr_image, resized_image, boxes, threshold=0.3, conf_labels=True):
# Helper function to multiply shape by ratio
def multiply_by_ratio(ratio_x, ratio_y, box):
return [
max(shape * ratio_y, 10) if idx % 2 else shape * ratio_x
for idx, shape in enumerate(box[:-1])
]
# Define colors for boxes and descriptions
colors = {"red": (255, 0, 0), "green": (0, 255, 0)}
# Fetch image shapes to calculate ratio
(real_y, real_x), (resized_y, resized_x) = image.shape[:2], resized_image.shape[:2]
ratio_x, ratio_y = real_x / resized_x, real_y / resized_y
# Convert base image from bgr to rgb format
rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
# Iterate through non-zero boxes
for box in boxes:
# Pick confidence factor from last place in array
conf = box[-1]
if conf > threshold:
# Convert float to int and multiply position of each box by x and y ratio
(x_min, y_min, x_max, y_max) = map(int, multiply_by_ratio(ratio_x, ratio_y, box))
# Draw box based on position, parameters in rectangle function are: image, start_point, end_point, color, thickness
rgb_image = cv2.rectangle(rgb_image, (x_min, y_min), (x_max, y_max), colors["green"], 3)
# Add text to image based on position and confidence, parameters in putText function are: image, text, bottomleft_corner_textfield, font, font_scale, color, thickness, line_type
if conf_labels:
rgb_image = cv2.putText(
rgb_image,
f"{conf:.2f}",
(x_min, y_min - 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.8,
colors["red"],
1,
cv2.LINE_AA,
)
return rgb_image
plt.figure(figsize=(10, 6))
plt.axis("off")
plt.imshow(convert_result_to_image(image, resized_image, boxes, conf_labels=False));
```
|
github_jupyter
|
<h3>Without Cython</h3>
<p>This program generates $N$ random integers between $1$ and $M$, then squares them and returns the sum of the squares. It therefore computes the squared length of a random vector with integer coordinates in the interval $[1,M]$.</p>
```
def cuadrados(N,M):
res = 0
for muda in xrange(N):
x = randint(1,M)
res += x*x
return res
for n in srange(3,8):
%time A = cuadrados(10^n,10^6)
```
<h3>With Cython</h3>
<p>Same computation:</p>
```
%%cython
import math
import random
def cuadrados_cy(long long N, long long M):
cdef long long res = 0
cdef long long muda
cdef long long x
for muda in xrange(N):
x = random.randint(1,M)
res += math.pow(x,2)
return res
for n in srange(3,8):
%time A = cuadrados_cy(10^n,10^6)
```
<h3>Optimizing random number generation:</h3>
```
%%cython
cdef extern from 'gsl/gsl_rng.h':
ctypedef struct gsl_rng_type:
pass
ctypedef struct gsl_rng:
pass
gsl_rng_type *gsl_rng_mt19937
gsl_rng *gsl_rng_alloc(gsl_rng_type * T)
cdef gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937)
cdef extern from 'gsl/gsl_randist.h':
long int uniform 'gsl_rng_uniform_int'(gsl_rng * r, unsigned long int n)
def main():
cdef int n
n = uniform(r,1000000)
return n
cdef long f(long x):
return x**2
import random
def cuadrados_cy2(int N):
cdef long res = 0
cdef int muda
for muda in range(N):
res += f(main())
return res
for n in srange(3,8):
%time A = cuadrados_cy2(10^n)
```
<h3>A similar problem without random numbers:</h3>
```
%%cython
def cuadrados_cy3(long long int N):
cdef long long int res = 0
cdef long long int k
for k in range(N):
res += k**2
return res
for n in srange(3,8):
%time A = cuadrados_cy3(10^n)
def cuadrados5(N):
res = 0
for k in range(N):
res += k**2
return res
for n in srange(3,8):
%time A = cuadrados5(10^n)
```
<p>We have verified, in two ways, that random number generation is where Python spends most of its time in this computation. If we optimize that part using a C library, or simply remove it, the computation is much faster. Cython loses a great deal of efficiency when it has to execute Python functions, which are much slower than the corresponding C functions.</p>
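<p>A minimal pure-Python sketch of the same comparison (the loop sizes are illustrative): timing the loop with and without random number generation shows where the time goes.</p>

```python
import random
import timeit

def with_rng(n):
    # Sum of squares of n random integers in [1, 10^6]
    return sum(random.randint(1, 10**6)**2 for _ in range(n))

def without_rng(n):
    # Same arithmetic work, but with deterministic values
    return sum(k**2 for k in range(n))

# The RNG version is consistently slower: randint dominates the runtime
t_rng = timeit.timeit(lambda: with_rng(10_000), number=5)
t_det = timeit.timeit(lambda: without_rng(10_000), number=5)
```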
|
github_jupyter
|
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
_**Regression with Deployment using Hardware Performance Dataset**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Results](#Results)
1. [Test](#Test)
1. [Acknowledgements](#Acknowledgements)
## Introduction
In this example we use the Hardware Performance Dataset to showcase how you can use AutoML for a simple regression problem. The Regression goal is to predict the performance of certain combinations of hardware parts.
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
In this notebook you will learn how to:
1. Create an `Experiment` in an existing `Workspace`.
2. Configure AutoML using `AutoMLConfig`.
3. Train the model using local compute.
4. Explore the results.
5. Test the best fitted model.
## Setup
As part of the setup you have already created an Azure ML Workspace object. For AutoML you will need to create an Experiment object, which is a named object in a Workspace used to run experiments.
```
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import os
from sklearn.model_selection import train_test_split
import azureml.dataprep as dprep
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-regression-hardware'
project_folder = './sample_projects/automl-remote-regression'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Create or Attach existing AmlCompute
You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.
#### Creation of AmlCompute takes approximately 5 minutes.
If the AmlCompute with that name is already in your workspace this code will skip the creation process.
As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.
```
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
# Choose a name for your cluster.
amlcompute_cluster_name = "automlcl"
found = False
# Check if this compute target already exists in the workspace.
cts = ws.compute_targets
if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[amlcompute_cluster_name]
if not found:
print('Creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", # for GPU, use "STANDARD_NC6"
#vm_priority = 'lowpriority', # optional
max_nodes = 6)
# Create the cluster.
compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)
# Can poll for a minimum number of nodes and for a specific timeout.
# If no min_node_count is provided, it will use the scale settings for the cluster.
compute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)
# For a more detailed view of current AmlCompute status, use get_status().
```
# Data
Here we load the data to be utilized in Azure compute. To do this, first load all the necessary libraries and dependencies, set up the paths for the data, and create the conda_run_config.
```
if not os.path.isdir('data'):
os.mkdir('data')
if not os.path.exists(project_folder):
os.makedirs(project_folder)
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
import pkg_resources
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to AmlCompute
conda_run_config.target = compute_target
conda_run_config.environment.docker.enabled = True
conda_run_config.environment.docker.base_image = azureml.core.runconfig.DEFAULT_CPU_IMAGE
dprep_dependency = 'azureml-dataprep==' + pkg_resources.get_distribution("azureml-dataprep").version
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]', dprep_dependency], conda_packages=['numpy'])
conda_run_config.environment.python.conda_dependencies = cd
```
### Load Data
Here we create the script to be run in Azure compute for loading the data: load the hardware dataset into the X and y variables, then split the data using random_split and return X_train and y_train for training the model.
```
data = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv"
dflow = dprep.read_csv(data, infer_column_types=True)
dflow.get_profile()
X = dflow.drop_columns(columns=['ERP'])
y = dflow.keep_columns(columns=['ERP'], validate_column_exists=True)
X_train, X_test = X.random_split(percentage=0.8, seed=223)
y_train, y_test = y.random_split(percentage=0.8, seed=223)
dflow.head()
```
## Train
Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.
|Property|Description|
|-|-|
|**task**|classification or regression|
|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics: <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
|**n_cross_validations**|Number of cross validation splits.|
|**X**|(sparse) array-like, shape = [n_samples, n_features]|
|**y**|(sparse) array-like, shape = [n_samples, ], targets values.|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)
##### If you would like to see even better results increase "iteration_time_out minutes" to 10+ mins and increase "iterations" to a minimum of 30
```
automl_settings = {
"iteration_timeout_minutes": 5,
"iterations": 10,
"n_cross_validations": 5,
"primary_metric": 'spearman_correlation',
"preprocess": True,
"max_concurrent_iterations": 5,
"verbosity": logging.INFO,
}
automl_config = AutoMLConfig(task = 'regression',
debug_log = 'automl_errors_20190417.log',
path = project_folder,
run_configuration=conda_run_config,
X = X_train,
y = y_train,
**automl_settings
)
remote_run = experiment.submit(automl_config, show_output = False)
remote_run
```
## Results
#### Widget for Monitoring Runs
The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.
**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
```
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
```
## Retrieve All Child Runs
You can also use SDK methods to fetch all the child runs and see individual metrics that we log.
```
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```
## Retrieve the Best Model
Below we select the best pipeline from our iterations. The get_output method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on get_output allow you to retrieve the best run and fitted model for any logged metric or for a particular iteration.
```
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
```
#### Best Model Based on Any Other Metric
Show the run and the model that has the smallest `root_mean_squared_error` value (which turned out to be the same as the one with largest `spearman_correlation` value):
```
lookup_metric = "root_mean_squared_error"
best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
iteration = 3
third_run, third_model = remote_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
```
## Register the Fitted Model for Deployment
If neither metric nor iteration are specified in the register_model call, the iteration with the best primary metric is registered.
```
description = 'AutoML Model'
tags = None
model = remote_run.register_model(description = description, tags = tags)
print(remote_run.model_id) # This will be written to the script file later in the notebook.
```
### Create Scoring Script
The scoring script is required to generate the image for deployment. It contains the code to do the predictions on input data.
```
%%writefile score.py
import pickle
import json
import numpy
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
def run(rawdata):
try:
data = json.loads(rawdata)['data']
data = numpy.array(data)
result = model.predict(data)
except Exception as e:
result = str(e)
return json.dumps({"error": result})
return json.dumps({"result":result.tolist()})
```
### Create a YAML File for the Environment
To ensure the fit results are consistent with the training results, the SDK dependency versions need to be the same as the environment that trains the model. Details about retrieving the versions can be found in notebook [12.auto-ml-retrieve-the-training-sdk-versions](12.auto-ml-retrieve-the-training-sdk-versions.ipynb).
```
dependencies = remote_run.get_run_sdk_dependencies(iteration = 1)
for p in ['azureml-train-automl', 'azureml-sdk', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-sdk[automl]'])
conda_env_file_name = 'myenv.yml'
myenv.save_to_file('.', conda_env_file_name)
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-sdk']))
# Substitute the actual model id in the script file.
script_file_name = 'score.py'
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', remote_run.model_id))
```
### Create a Container Image
Next, use Azure Container Instances to deploy the model as a web service. This is useful for quickly deploying and validating your model, or when testing a model that is under development.
```
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'area': "digits", 'type': "automl_regression"},
description = "Image for automl regression sample")
image = Image.create(name = "automlsampleimage",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
```
### Deploy the Image as a Web Service on Azure Container Instance
Deploy an image that contains the model and other assets needed by the service.
```
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
tags = {'area': "digits", 'type': "automl_regression"},
description = 'sample service for Automl Regression')
from azureml.core.webservice import Webservice
aci_service_name = 'automl-sample-hardware'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
```
### Delete a Web Service
Deletes the specified web service.
```
#aci_service.delete()
```
### Get Logs from a Deployed Web Service
Gets logs from a deployed web service.
```
#aci_service.get_logs()
```
## Test
Now that the model is trained, split the data the same way it was split for training (the difference here is that the split happens locally), then run the test data through the trained model to get the predicted values.
```
X_test = X_test.to_pandas_dataframe()
y_test = y_test.to_pandas_dataframe()
y_test = np.array(y_test)
y_test = y_test[:,0]
X_train = X_train.to_pandas_dataframe()
y_train = y_train.to_pandas_dataframe()
y_train = np.array(y_train)
y_train = y_train[:,0]
```
##### Predict on training and test set, and calculate residual values.
```
y_pred_train = fitted_model.predict(X_train)
y_residual_train = y_train - y_pred_train
y_pred_test = fitted_model.predict(X_test)
y_residual_test = y_test - y_pred_test
```
### Calculate metrics for the prediction
Now visualize the data on a scatter plot to compare the truth (actual) values with the values predicted
by the trained model.
```
%matplotlib inline
from sklearn.metrics import mean_squared_error, r2_score
# Set up a multi-plot chart.
f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Regression Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(16)
# Plot residual values of training set.
a0.axis([0, 360, -200, 200])
a0.plot(y_residual_train, 'bo', alpha = 0.5)
a0.plot([-10,360],[0,0], 'r-', lw = 3)
a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)
a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)),fontsize = 12)
a0.set_xlabel('Training samples', fontsize = 12)
a0.set_ylabel('Residual Values', fontsize = 12)
# Plot residual values of test set.
a1.axis([0, 90, -200, 200])
a1.plot(y_residual_test, 'bo', alpha = 0.5)
a1.plot([-10,90],[0,0], 'r-', lw = 3)
a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)
a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)),fontsize = 12)
a1.set_xlabel('Test samples', fontsize = 12)
a1.set_yticklabels([])
plt.show()
%matplotlib notebook
test_pred = plt.scatter(y_test, y_pred_test, color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
```
## Acknowledgements
This Predicting Hardware Performance Dataset is made available under the CC0 1.0 Universal (CC0 1.0) Public Domain Dedication License: https://creativecommons.org/publicdomain/zero/1.0/. Any rights in individual contents of the database are licensed under the CC0 1.0 Universal (CC0 1.0) Public Domain Dedication License: https://creativecommons.org/publicdomain/zero/1.0/ . The dataset itself can be found here: https://www.kaggle.com/faizunnabi/comp-hardware-performance and https://archive.ics.uci.edu/ml/datasets/Computer+Hardware
_**Citation Found Here**_
|
github_jupyter
|
```
%matplotlib inline
```
Word Embeddings: Encoding Lexical Semantics
===========================================
Word embeddings are dense vectors of real numbers, one per word in your
vocabulary. In NLP, it is almost always the case that your features are
words! But how should you represent a word in a computer? You could
store its ascii character representation, but that only tells you what
the word *is*, it doesn't say much about what it *means* (you might be
able to derive its part of speech from its affixes, or properties from
its capitalization, but not much). Even more, in what sense could you
combine these representations? We often want dense outputs from our
neural networks, where the inputs are $|V|$ dimensional, where
$V$ is our vocabulary, but often the outputs are only a few
dimensional (if we are only predicting a handful of labels, for
instance). How do we get from a massive dimensional space to a smaller
dimensional space?
How about instead of ascii representations, we use a one-hot encoding?
That is, we represent the word $w$ by
\begin{align}\overbrace{\left[ 0, 0, \dots, 1, \dots, 0, 0 \right]}^\text{|V| elements}\end{align}
where the 1 is in a location unique to $w$. Any other word will
have a 1 in some other location, and a 0 everywhere else.
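As a quick illustration (using a hypothetical three-word vocabulary), a one-hot vector is just a |V|-length vector of zeros with a single 1 at the word's index:

```python
# Hypothetical toy vocabulary; each word gets a unique index.
word_to_ix = {"mathematician": 0, "physicist": 1, "store": 2}

def one_hot(word, word_to_ix):
    # A |V|-dimensional vector that is zero everywhere except at the word's index.
    vec = [0.0] * len(word_to_ix)
    vec[word_to_ix[word]] = 1.0
    return vec

print(one_hot("physicist", word_to_ix))  # [0.0, 1.0, 0.0]
```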
There is an enormous drawback to this representation, besides just how
huge it is. It basically treats all words as independent entities with
no relation to each other. What we really want is some notion of
*similarity* between words. Why? Let's see an example.
Suppose we are building a language model. Suppose we have seen the
sentences
* The mathematician ran to the store.
* The physicist ran to the store.
* The mathematician solved the open problem.
in our training data. Now suppose we get a new sentence never before
seen in our training data:
* The physicist solved the open problem.
Our language model might do OK on this sentence, but wouldn't it be much
better if we could use the following two facts:
* We have seen mathematician and physicist in the same role in a sentence. Somehow they
have a semantic relation.
* We have seen mathematician in the same role in this new unseen sentence
as we are now seeing physicist.
and then infer that physicist is actually a good fit in the new unseen
sentence? This is what we mean by a notion of similarity: we mean
*semantic similarity*, not simply having similar orthographic
representations. It is a technique to combat the sparsity of linguistic
data, by connecting the dots between what we have seen and what we
haven't. This example of course relies on a fundamental linguistic
assumption: that words appearing in similar contexts are related to each
other semantically. This is called the `distributional
hypothesis <https://en.wikipedia.org/wiki/Distributional_semantics>`__.
Getting Dense Word Embeddings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
How can we solve this problem? That is, how could we actually encode
semantic similarity in words? Maybe we think up some semantic
attributes. For example, we see that both mathematicians and physicists
can run, so maybe we give these words a high score for the "is able to
run" semantic attribute. Think of some other attributes, and imagine
what you might score some common words on those attributes.
If each attribute is a dimension, then we might give each word a vector,
like this:
\begin{align}q_\text{mathematician} = \left[ \overbrace{2.3}^\text{can run},
\overbrace{9.4}^\text{likes coffee}, \overbrace{-5.5}^\text{majored in Physics}, \dots \right]\end{align}
\begin{align}q_\text{physicist} = \left[ \overbrace{2.5}^\text{can run},
\overbrace{9.1}^\text{likes coffee}, \overbrace{6.4}^\text{majored in Physics}, \dots \right]\end{align}
Then we can get a measure of similarity between these words by doing:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = q_\text{physicist} \cdot q_\text{mathematician}\end{align}
Although it is more common to normalize by the lengths:
\begin{align}\text{Similarity}(\text{physicist}, \text{mathematician}) = \frac{q_\text{physicist} \cdot q_\text{mathematician}}
{\| q_\text{physicist} \| \| q_\text{mathematician} \|} = \cos (\phi)\end{align}
Where $\phi$ is the angle between the two vectors. That way,
extremely similar words (words whose embeddings point in the same
direction) will have similarity 1. Extremely dissimilar words should
have similarity -1.
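The cosine similarity above can be computed directly from the hand-crafted vectors in the text (truncated here to the three attributes shown; the values are illustrative):

```python
import math

# The three illustrative attribute values from the text: can run, likes coffee, majored in Physics.
q_mathematician = [2.3, 9.4, -5.5]
q_physicist = [2.5, 9.1, 6.4]

def cosine_similarity(u, v):
    # Dot product normalized by the lengths of both vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(q_physicist, q_mathematician))
```

A vector always has similarity 1 with itself, since the angle between a vector and itself is zero.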
You can think of the sparse one-hot vectors from the beginning of this
section as a special case of these new vectors we have defined, where
each word basically has similarity 0, and we gave each word some unique
semantic attribute. These new vectors are *dense*, which is to say their
entries are (typically) non-zero.
But these new vectors are a big pain: you could think of thousands of
different semantic attributes that might be relevant to determining
similarity, and how on earth would you set the values of the different
attributes? Central to the idea of deep learning is that the neural
network learns representations of the features, rather than requiring
the programmer to design them herself. So why not just let the word
embeddings be parameters in our model, and then be updated during
training? This is exactly what we will do. We will have some *latent
semantic attributes* that the network can, in principle, learn. Note
that the word embeddings will probably not be interpretable. That is,
although with our hand-crafted vectors above we can see that
mathematicians and physicists are similar in that they both like coffee,
if we allow a neural network to learn the embeddings and see that both
mathematicians and physicists have a large value in the second
dimension, it is not clear what that means. They are similar in some
latent semantic dimension, but this probably has no interpretation to
us.
In summary, **word embeddings are a representation of the *semantics* of
a word, efficiently encoding semantic information that might be relevant
to the task at hand**. You can embed other things too: part of speech
tags, parse trees, anything! The idea of feature embeddings is central
to the field.
Word Embeddings in Pytorch
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we get to a worked example and an exercise, a few quick notes
about how to use embeddings in Pytorch and in deep learning programming
in general. Similar to how we defined a unique index for each word when
making one-hot vectors, we also need to define an index for each word
when using embeddings. These will be keys into a lookup table. That is,
embeddings are stored as a $|V| \times D$ matrix, where $D$
is the dimensionality of the embeddings, such that the word assigned
index $i$ has its embedding stored in the $i$'th row of the
matrix. In all of my code, the mapping from words to indices is a
dictionary named word\_to\_ix.
The module that allows you to use embeddings is torch.nn.Embedding,
which takes two arguments: the vocabulary size, and the dimensionality
of the embeddings.
To index into this table, you must use torch.LongTensor (since the
indices are integers, not floats).
```
# Author: Robert Guthrie
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
word_to_ix = {"hello": 0, "world": 1}
embeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings
lookup_tensor_hello = torch.tensor([word_to_ix["hello"]], dtype=torch.long)
hello_embed = embeds(lookup_tensor_hello)
print("hello_embed: ", hello_embed)
lookup_tensor_world = torch.tensor([word_to_ix["world"]], dtype=torch.long)
world_embed = embeds(lookup_tensor_world)
print("worlds_embed: ", world_embed)
```
An Example: N-Gram Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Recall that in an n-gram language model, given a sequence of words
$w$, we want to compute
\begin{align}P(w_i | w_{i-1}, w_{i-2}, \dots, w_{i-n+1} )\end{align}
Where $w_i$ is the ith word of the sequence.
In this example, we will compute the loss function on some training
examples and update the parameters with backpropagation.
```
CONTEXT_SIZE = 5
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# we should tokenize the input, but we will ignore that for now
# build a list of tuples. Each tuple is ([ CONTEXT_SIZE context words ], target word)
ngrams = [([test_sentence[i + j] for j in range(CONTEXT_SIZE)], test_sentence[i + CONTEXT_SIZE])
for i in range(len(test_sentence) - CONTEXT_SIZE)]
# trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
# for i in range(len(test_sentence) - 2)]
print("the first 3 ngrams, just so you can see what they look like: ")
print(ngrams[:3])
print("the last 3 ngrams: ")
print(ngrams[-3:])
vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}
class NGramLanguageModeler(nn.Module):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGramLanguageModeler, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear1 = nn.Linear(context_size * embedding_dim, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs).view((1, -1))
out = F.relu(self.linear1(embeds))
out = self.linear2(out)
# print("out: ", out)
log_probs = F.log_softmax(out, dim=1)
# print("log probs: ", log_probs)
return log_probs
losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(1):
total_loss = 0
for context, target in ngrams:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print("losses: ", losses)
print("The loss decreased every iteration over the training data!")
```
Exercise: Computing Word Embeddings: Continuous Bag-of-Words
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep
learning. It is a model that tries to predict words given the context of
a few words before and a few words after the target word. This is
distinct from language modeling, since CBOW is not sequential and does
not have to be probabilistic. Typically, CBOW is used to quickly train
word embeddings, and these embeddings are used to initialize the
embeddings of some more complicated model. Usually, this is referred to
as *pretraining embeddings*. It almost always helps performance a couple
of percent.
The CBOW model is as follows. Given a target word $w_i$ and an
$N$ context window on each side, $w_{i-1}, \dots, w_{i-N}$
and $w_{i+1}, \dots, w_{i+N}$, referring to all context words
collectively as $C$, CBOW tries to minimize
\begin{align}-\log p(w_i | C) = -\log \text{Softmax}(A(\sum_{w \in C} q_w) + b)\end{align}
where $q_w$ is the embedding for word $w$.
Implement this model in Pytorch by filling in the class below. Some
tips:
* Think about which parameters you need to define.
* Make sure you know what shape each operation expects. Use .view() if you need to
reshape.
```
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
EMBEDDING_DIM = 10
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
context = [raw_text[i - 2], raw_text[i - 1],
raw_text[i + 1], raw_text[i + 2]]
target = raw_text[i]
data.append((context, target))
print(data[:5])
class CBOW(nn.Module):
def __init__(self, vocab_size, embedding_dim):
super(CBOW, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear = nn.Linear(embedding_dim, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs)
# print("embeds: ", embeds)
qsum = torch.sum(embeds, dim=0)
# print("qsum: ", qsum)
out = self.linear(qsum)
# print("out: ", out)
log_probs = F.log_softmax(out, dim=0)
# print("log probs: ", log_probs)
return log_probs
# create your model and train. here are some functions to help you make
# the data ready for use by your module
def make_context_vector(context, word_to_ix):
idxs = [word_to_ix[w] for w in context]
return torch.tensor(idxs, dtype=torch.long)
context_vector = make_context_vector(data[0][0], word_to_ix) # example
print("context vector: ", context_vector)
losses = []
loss_function = nn.NLLLoss()
model = CBOW(len(vocab), EMBEDDING_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epoch in range(10):
total_loss = 0
for context, target in data:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in tensors)
# context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
context_idxs = make_context_vector(context, word_to_ix)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a tensor)
# loss_function requires a minibatch index - here we have only 1
loss = loss_function(log_probs.unsqueeze(0), torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
print(losses) # The loss decreased every iteration over the training data!
```
This notebook will hopefully contain timeseries that plot continuous data from moorings alongside model output.
```
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import pickle
import cmocean
import json
import f90nml
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
saveloc='/ocean/kflanaga/MEOPAR/savedData/King_CountyData/hourly_pickle_files'
year=2019
Mooring='PointWilliams'
# Parameters
saveloc = "/ocean/kflanaga/MEOPAR/savedData/King_CountyData/hourly_pickle_files"
year = 2018
Mooring = "PointWilliams"
##### Loading in pickle file data
with open(os.path.join(saveloc,f'hourly_data_{Mooring}_{year}.pkl'),'rb') as hh:
data=pickle.load(hh)
grid=xr.open_mfdataset(f'/ocean/kflanaga/MEOPAR/savedData/201905_grid_data/ts_HC201905_{year}_{Mooring}.nc')
%%time
tt=grid.time_centered
vot=grid.votemper.isel(deptht=0,y=0,x=0)
vos=grid.vosaline.isel(deptht=0,y=0,x=0)
obsvar='CT'
fig,ax=plt.subplots(1,1,figsize=(14,7))
ps=[]
p0,=ax.plot(data['dtUTC'],data[obsvar],'.',color='blue',label=f'Observed ')
ps.append(p0)
p0,=ax.plot(tt,vot,'-',color='red',label='Modeled')
ps.append(p0)
ax.legend(handles=ps)
ax.set_ylabel(f'{obsvar}')
ax.set_xlabel('Date')
ax.set_title('Temperature timeseries')
plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right')
M = 15
xticks = mpl.ticker.MaxNLocator(M)
ax.xaxis.set_major_locator(xticks)
yearsFmt = mdates.DateFormatter('%d %b %y')
ax.xaxis.set_major_formatter(yearsFmt)
obsvar='SA'
fig,ax=plt.subplots(1,1,figsize=(14,7))
ps=[]
p0,=ax.plot(data['dtUTC'],data[obsvar],'.',color='blue',label=f'Observed')
ps.append(p0)
p0,=ax.plot(tt,vos,'-',color='red',label='Modeled')
ps.append(p0)
ax.legend(handles=ps)
ax.set_ylabel(f'{obsvar}')
ax.set_xlabel('Date')
ax.set_title('Salinity timeseries')
plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right')
M = 15
xticks = mpl.ticker.MaxNLocator(M)
ax.xaxis.set_major_locator(xticks)
yearsFmt = mdates.DateFormatter('%d %b %y')
ax.xaxis.set_major_formatter(yearsFmt)
grid.close()
bio=xr.open_mfdataset(f'/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data/ts_HC201905_{year}_{Mooring}.nc')
ik=0
ij=0
ii=0
%%time
tt=bio.time_counter
mod_nitrate=(bio.nitrate.isel(deptht=ik,y=ij,x=ii))
diatom=bio.diatoms.isel(deptht=ik,y=ij,x=ii)
flagellate=bio.flagellates.isel(deptht=ik,y=ij,x=ii)
ciliate=bio.ciliates.isel(deptht=ik,y=ij,x=ii)
mod_Chl=(diatom+flagellate+ciliate)*1.8
data.columns
obsvar='Chl'
modvar=mod_Chl
fig,ax=plt.subplots(1,1,figsize=(14,7))
ps=[]
p0,=ax.plot(data['dtUTC'],data[obsvar],'.',color='blue',label=f'Observed ')
ps.append(p0)
p0,=ax.plot(tt,modvar,'-',color='red',label='Modeled')
ps.append(p0)
ax.legend(handles=ps)
ax.set_ylabel(f'{obsvar}')
ax.set_xlabel('Date')
ax.set_title('Chlorophyll Timeseries')
plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right')
M = 15
xticks = mpl.ticker.MaxNLocator(M)
ax.xaxis.set_major_locator(xticks)
yearsFmt = mdates.DateFormatter('%d %b %y')
ax.xaxis.set_major_formatter(yearsFmt)
obsvar='NO23'
modvar=mod_nitrate
fig,ax=plt.subplots(1,1,figsize=(14,7))
ps=[]
p0,=ax.plot(data['dtUTC'],data[obsvar],'.',color='blue',label=f'Observed ')
ps.append(p0)
p0,=ax.plot(tt,modvar,'-',color='red',label='Modeled')
ps.append(p0)
ax.legend(handles=ps)
ax.set_ylabel(f'{obsvar}')
ax.set_ylim((0,40))
ax.set_xlabel('Date')
ax.set_title('Nitrate Timeseries')
plt.setp(ax.get_xticklabels(), rotation=30, horizontalalignment='right')
M = 15
xticks = mpl.ticker.MaxNLocator(M)
ax.xaxis.set_major_locator(xticks)
yearsFmt = mdates.DateFormatter('%d %b %y')
ax.xaxis.set_major_formatter(yearsFmt)
```
```
# Dependencies
import tweepy
import json
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import time
from pprint import pprint
# Import and Initialize Sentiment Analyzer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
#import Twitter API keys
from bot_config import access_token,access_token_secret,consumer_key,consumer_secret
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
#target_handle
# Search through bot tweets for new handles to analyze
#bot ="@brgrave1_graves"
#public_tweets = api.user_timeline(bot)
#pprint(public_tweets)
print(tweet["in_reply_to_screen_name"])
#define target search from twitter bots account
#target_search= ("@brgrave1_graves:@%s" % handle_requested)
# Search for all tweets
#public_tweets=api.status_lookup(target_search,show_user=True)
#pprint(public_tweets)
#scan my account for new analysis to run
def Scan_Account():
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Search for all tweets
public_tweets=api.user_timeline()
# Loop through all tweets
for tweet in public_tweets:
# Get ID and Author of most recent tweet directed to me
tweet_id = tweet["id"]
tweet_author = tweet["user"]["screen_name"]
handle_requested = tweet["in_reply_to_screen_name"]
# Use Try-Except to avoid the duplicate error
try:
# Respond to tweet & perform sentiment analysis
target_handle = ("@%s" % handle_requested)
# Counter
counter = 1
# Variables for holding sentiments
compound_sentiments = []
tweets_ago = []
# Loop through 5 pages of tweets (total 100 tweets)
for x in range(5):
# Get all tweets from home feed
public_tweets = api.user_timeline(target_handle)
# Loop through all tweets
for tweet in public_tweets:
# Run Vader Analysis on each tweet
compound = analyzer.polarity_scores(tweet["text"])["compound"]
pos = analyzer.polarity_scores(tweet["text"])["pos"]
neu = analyzer.polarity_scores(tweet["text"])["neu"]
neg = analyzer.polarity_scores(tweet["text"])["neg"]
# Add to counter
counter = counter + 1
# Add sentiments for each tweet into an array
compound_sentiments.append(compound)
tweets_ago.append(counter)
except Exception:
print("Already responded to this one!")
# Print message if duplicate
#creating the chart function
def analysis_chart():
sns.set()
#creating chart
plt.plot(tweets_ago, compound_sentiments, linewidth=0.5, marker="o", color="blue")
# Add labels to the x and y axes
plt.title("Sentiment Analysis of %s's tweets" % target_handle)
plt.ylabel("Tweet Polarity")
plt.xlabel("Tweets Ago")
#adding legend
legend = ("%s's tweets" % tweet_author)
plt.legend(legend, bbox_to_anchor=(1.1, 1),
fancybox=True, shadow=True, ncol=1, loc='upper center', title="Tweets")
# Set a grid on the plot
plt.grid(True)
# Add a semi-transparent horizontal line at y = 0
plt.hlines(0, 0, 10, alpha=0.9)
plt.savefig("Sentiment Analysis of tweets.jpg")
# tweet analysis back to twitter
api.update_with_media("Sentiment Analysis of tweets.jpg")
counter = 0
for counter in range(3):
analysis_chart()
time.sleep(60)
#scan my account for new analysis to run
def Scan_Account():
# Setup Tweepy API Authentication
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Search for all tweets
bot_tweets = api.search(bot, count=100, result_type="recent")
# Loop through all tweets
for tweet in bot_tweets["statuses"]:
# Get ID and Author of most recent tweet directed to me
tweet_id = tweet["id"]
tweet_author = tweet["user"]["screen_name"]
#check whether analysis has already been run on this author by looking through handles_analyzed
if tweet_author in handles_analyzed:
#tweet submitter of analysis this analysis has already been run
api.update_status("Sentiment Analysis has already been performed on @%s! Please feel free to submit another"
% tweet_author, in_reply_to_status_id=tweet_id)
else:
#add handle to list of analyzed twitter handles to not clutter timeline
handles_analyzed.append(tweet_author)
#perform sentiment analysis
target_handle = ("@%s!" % tweet_author)
# Counter
counter = 1
# Variables for holding sentiments
handle_sentiments = []
# Loop through 5 pages of tweets (total 100 tweets)
for x in range(5):
# Get all tweets from home feed
public_tweets = api.user_timeline(target_handle)
# Loop through all tweets
for tweet in public_tweets:
# Run Vader Analysis on each tweet
compound = analyzer.polarity_scores(tweet["text"])["compound"]
pos = analyzer.polarity_scores(tweet["text"])["pos"]
neu = analyzer.polarity_scores(tweet["text"])["neu"]
neg = analyzer.polarity_scores(tweet["text"])["neg"]
tweets_ago = counter
# Add sentiments for each tweet into an array
handle_sentiments.append({"Twitter Handle":target_handle,
"Compound": compound,
"Positive": pos,
"Negative": neu,
"Neutral": neg,
"Tweets Ago": counter})
# Add to counter
counter = counter + 1
if handle_requested in handles_analyzed:
#tweet submitter of analysis this analysis has already been run
api.update_status("Sentiment Analysis has already been performed on @%s! Please feel free to submit another"
% handle_requested, in_reply_to_status_id=tweet_id)
else:
#add handle to list of analyzed twitter handles to not clutter timeline
handles_analyzed.append(handle_requested)
target_handle = ("@%s" % tweet_author)
# Counter
counter = 1
# Variables for holding sentiments
bbc_sentiments = []
# Loop through 5 pages of tweets (total 100 tweets)
for x in range(5):
# Get all tweets from home feed
public_tweets = api.user_timeline(target_handle)
# Loop through all tweets
for tweet in public_tweets:
# Run Vader Analysis on each tweet
compound = analyzer.polarity_scores(tweet["text"])["compound"]
pos = analyzer.polarity_scores(tweet["text"])["pos"]
neu = analyzer.polarity_scores(tweet["text"])["neu"]
neg = analyzer.polarity_scores(tweet["text"])["neg"]
tweets_ago = counter
# Add sentiments for each tweet into an array
bbc_sentiments.append({"Media Source":target_handle,
"Compound": compound,
"Positive": pos,
"Negative": neu,
"Neutral": neg,
"Date": tweet["created_at"],
"Tweets Ago": counter,
"Tweet": tweet["text"]})
# Add to counter
counter = counter + 1
#perform sentiment analysis
target_handle = ("@%s" % handle_requested)
# Counter
counter = 1
# Variables for holding sentiments
compound_sentiments = []
tweets_ago = []
# Loop through 5 pages of tweets (total 100 tweets)
for x in range(5):
# Get all tweets from home feed
public_tweets = api.user_timeline(target_handle)
# Loop through all tweets
for tweet in public_tweets:
# Run Vader Analysis on each tweet
compound = analyzer.polarity_scores(tweet["text"])["compound"]
pos = analyzer.polarity_scores(tweet["text"])["pos"]
neu = analyzer.polarity_scores(tweet["text"])["neu"]
neg = analyzer.polarity_scores(tweet["text"])["neg"]
# Add to counter
counter = counter + 1
# Add sentiments for each tweet into an array
compound_sentiments.append(compound)
tweets_ago.append(counter)
# Create converse function
def Converse(line_number):
# Find the latest tweet from conversation_partner
public_tweets = api.search(conversation_partner, count=1, result_type="recent")
for tweet in public_tweets["statuses"]:
print(tweet)
# Respond to the tweet with one of the response lines
tweet_id = tweet["id"]
print(tweet_id)
print(tweet["text"])
api.update_status(
response_lines[line_number],
in_reply_to_status_id=tweet_id)
# Create Thank You Function
def ThankYou():
# Twitter Credentials
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, parser=tweepy.parsers.JSONParser())
# Search for all tweets
public_tweets = api.search(target_term, count=100, result_type="recent")
# Loop through all tweets
for tweet in public_tweets["statuses"]:
# Get ID and Author of most recent tweet directed to me
tweet_id = tweet["id"]
tweet_author = tweet["user"]["screen_name"]
# Print the tweet_id
print(tweet_id)
# Use Try-Except to avoid the duplicate error
try:
# Respond to tweet
api.update_status(
"Thank you @%s! Come again!" %
tweet_author,
in_reply_to_status_id=tweet_id)
# Print success message
print("Successful response!")
except Exception: # Print message if duplicate
print("Already responded to this one!")
# Print message to signify complete cycle
print("We're done for now. I'll check again in 60 seconds.")
```
|
github_jupyter
|
# Online prediction for radon-small
In online mode, the model learns as soon as new data arrives.
This means that when we want a prediction we don't need to provide a feature vector,
since all the data has already been processed by the model.
Explore the following models:
* Constant model - The same value for all future points
* Previous Day model - The next day is the same as the previous day
* Daily Pattern model - Calculate a daily pattern from historical data. Use it as the next-day prediction.
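The online-mode contract described above can be sketched as a tiny interface (the class and method names here are illustrative, not part of this project's code):

```python
class OnlineModel:
    """Online mode: every new observation updates the internal state,
    so predict() needs no feature vector -- only a horizon."""

    def __init__(self):
        self.history = []

    def update(self, x):
        # Called as soon as a new data point arrives
        self.history.append(x)

    def predict(self, steps=1):
        # Constant-model flavour: repeat the running mean
        mu = sum(self.history) / len(self.history)
        return [mu] * steps


model = OnlineModel()
for x in [1.0, 2.0, 3.0]:
    model.update(x)
print(model.predict(steps=2))  # [2.0, 2.0]
```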
```
import datetime
import calendar
import pprint
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['figure.figsize'] = 12, 4
```
# Load project
```
project_folder = '../../datasets/radon-small/'
with open(project_folder + 'project.json', 'r') as file:
project = json.load(file)
pprint.pprint(project)
print('Flow1')
flow = pd.read_csv(project_folder + 'flow1.csv', parse_dates=['time'])
flow = flow.set_index('time')['flow'].fillna(0)
flow = flow.resample('5T').pad()
flow.head()
```
## Helper functions
Helper functions for building training and test sets and calculating score
```
class PredictionModel:
def fit(self, data_points):
pass
def predict(self, prediction_day):
pass
def mae(y_hat, y):
"""
Calculate Mean Absolute Error
This metric works better here since the series has quite big outliers
"""
return np.sum(np.absolute(y_hat-y))/y.shape[0]
def split_data(split_day):
"""Get all data up to given day"""
end_day = split_day - pd.Timedelta('1 min')
return flow[:end_day]
def evaluate_day(model, split_day):
"""Evaluate data for single day"""
xs = split_data(split_day)
next_day = split_day + pd.Timedelta(1, 'D')
y = flow[next_day: next_day+pd.Timedelta('1439 min')]
model.fit(xs)
y_hat = model.predict(next_day)
return mae(y_hat, y)
def evaluate_model(model, start_day):
"""
Evaluate model on all days starting from split_day.
Returns 90th percentile error as model score
"""
last_day = pd.Timestamp(project['end-date'])
split_day = start_day
costs = []
while split_day < last_day:
cost = evaluate_day(model, split_day)
costs.append(cost)
split_day += pd.Timedelta(1, 'D')
return np.percentile(costs, 90), costs
split_data(pd.Timestamp('2016-11-10')).tail()
```
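As a quick sanity check of the scoring metric, here is mae on a toy pair of arrays (the function is restated so the snippet runs on its own):

```python
import numpy as np

def mae(y_hat, y):
    # Mean Absolute Error: the average of |y_hat - y|
    return np.sum(np.absolute(y_hat - y)) / y.shape[0]

y_hat = np.array([1.0, 2.0, 4.0])
y = np.array([1.0, 3.0, 2.0])
print(mae(y_hat, y))  # (0 + 1 + 2) / 3 = 1.0
```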
# Models
## ConstantMeanModel
```
class ConstantMeanModel(PredictionModel):
def __init__(self):
self.mu = 0
def fit(self, xs):
self.mu = np.mean(xs)
def predict(self, day):
return np.ones(12*24) * self.mu
score, costs = evaluate_model(ConstantMeanModel(), pd.Timestamp('2016-11-11'))
print('ConstantMeanModel score: {:.2f}'.format(score))
```
## Previous Day Model
Uses the values from the previous day
```
class LastDayModel(PredictionModel):
def fit(self, xs):
self.y = xs.values[-288:]
def predict(self, day):
return self.y
score, costs = evaluate_model(LastDayModel(), pd.Timestamp('2016-11-11'))
print('LastDayModel score: {:.2f}'.format(score))
```
Evaluate the model for a single day. An easy case:
```
evaluate_day(LastDayModel(), pd.Timestamp('2016-11-11'))
```
And when the next day is an outlier:
```
evaluate_day(LastDayModel(), pd.Timestamp('2017-05-01'))
```
## Daily Pattern model
Create a pattern of daily usage based on historical data and use this pattern to predict the next values.
(This can take up to 10 minutes to calculate)
```
class DailyPatternModel(PredictionModel):
def fit(self, xs):
# Use only the training data passed in; the global `flow` would leak future data
df = xs.to_frame().reset_index()
self.daily_pattern = df.groupby(by=[df.time.map(lambda x : (x.hour, x.minute))]).flow.mean().values
def predict(self, day):
return self.daily_pattern
score, costs = evaluate_model(DailyPatternModel(), pd.Timestamp('2016-11-11'))
print('DailyPatternModel score: {:.2f}'.format(score))
```
### Daily Pattern Median Model
Calculate median value for each time. Use it as a prediction for the next day.
```
class DayMedianModel(PredictionModel):
def fit(self, xs):
# Use only the training data passed in; the global `flow` would leak future data
df = xs.to_frame().reset_index()
self.daily_pattern = df.groupby(by=[df.time.map(lambda x : (x.hour, x.minute))]).flow.median().values
def predict(self, day):
return self.daily_pattern
score, costs = evaluate_model(DayMedianModel(), pd.Timestamp('2016-11-11'))
print('DayMedianModel score: {:.2f}'.format(score))
```
## Daily pattern with last value correction
This model calculates the daily pattern, but also corrects it based on the previous value
$$ x_{t} = \alpha (x_{t-1} - dp(t-1)) + dp(t)$$
where
- dp - the daily pattern
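The correction formula above is not implemented in this notebook; here is a minimal sketch of one reading of it, where the prediction is rolled forward recursively so the last-value correction decays by α each step (the function name, the default α, and the wrap-around choice for dp(t-1) at the first slot are mine):

```python
import numpy as np

def corrected_forecast(daily_pattern, last_value, alpha=0.5):
    """x_t = alpha * (x_{t-1} - dp(t-1)) + dp(t), applied recursively:
    the deviation of the last observation from the pattern decays
    geometrically over the forecast day."""
    y_hat = np.empty(len(daily_pattern))
    # Assumption: dp(t-1) for the first slot wraps to the last pattern slot
    prev_x, prev_dp = last_value, daily_pattern[-1]
    for t, dp in enumerate(daily_pattern):
        y_hat[t] = alpha * (prev_x - prev_dp) + dp
        prev_x, prev_dp = y_hat[t], dp
    return y_hat

dp = np.array([1.0, 2.0, 3.0])
print(corrected_forecast(dp, last_value=5.0, alpha=0.5))  # [2.0, 2.5, 3.25]
```

With alpha=0 the model falls back to the plain daily pattern, so the α hyper-parameter interpolates between DailyPatternModel and a last-value-anchored forecast.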
|
github_jupyter
|
# Classify speaking_audio files
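The balancing plan in the comments below reduces to deduplicating speaker IDs (a filename prefix) and sampling up to a quota per region/gender cell; a minimal sketch with made-up filenames:

```python
import random

# Hypothetical filenames; the speaker ID is assumed to be the first 6 characters
files = ['SPK001_M_A.wav', 'SPK001_M_A2.wav', 'SPK002_F_A.wav', 'SPK003_M_A.wav']

# Deduplicate speaker IDs so a prolific speaker cannot be drawn twice
speakers = sorted({f[:6] for f in files})

# Sample up to the quota without crashing when there are fewer speakers
quota = 2
picked = random.sample(speakers, min(quota, len(speakers)))
print(len(picked))  # 2
```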
```
# Build a balanced dataset
# 1. Gender 50:50
# 2. Region 25:25:25:25
# 10 male and 10 female speakers per region
# 80 speakers in total.
import os
import shutil
import random
A = [] # Gangwon
B = [] # Seoul/Gyeonggi
C = [] # Gyeongsang
D = [] # Jeolla
E = [] # Jeju (none at present)
F = [] # Chungcheong (none at present)
G = [] # other (none at present)
region = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
# Classify the files by region.
# Directory containing the elderly-voice dataset
basic_path = os.path.join('../Dataset_audio/old_total')
for i in region:
os.makedirs(basic_path + '/' + i)
for (path, dir, files) in os.walk(basic_path):
for filename in files:
ext = os.path.splitext(filename)[-1]
if ext == '.wav':
if os.path.splitext(filename)[0][-1] == 'A':
A.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'A', filename)
)
elif os.path.splitext(filename)[0][-1] == 'B':
B.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'B', filename)
)
elif os.path.splitext(filename)[0][-1] == 'C':
C.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'C', filename)
)
elif os.path.splitext(filename)[0][-1] == 'D':
D.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'D', filename)
)
elif os.path.splitext(filename)[0][-1] == 'E':
E.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'E', filename)
)
elif os.path.splitext(filename)[0][-1] == 'F':
F.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'F', filename)
)
elif os.path.splitext(filename)[0][-1] == 'G':
G.append(filename)
shutil.move(
os.path.join(path, filename),
os.path.join(basic_path, 'G', filename)
)
for i in [A, B, C, D, E, F, G]:
print('file_num: ', len(i))
# Split the region-sorted files by gender.
M=[]
F=[]
for i in region:
print(i)
for (path, dir, files) in os.walk(os.path.join(basic_path, i)):
for filename in files:
ext = os.path.splitext(filename)[-1]
if ext == '.wav':
if os.path.splitext(filename)[0][-6] == 'M':
#print(filename, 'M')
M.append(filename)
try:
os.mkdir(
os.path.join(basic_path, i, 'M')
)
except FileExistsError:
pass
shutil.move(
os.path.join(basic_path, i, filename),
os.path.join(basic_path, i, 'M', filename)
)
elif os.path.splitext(filename)[0][-6] == 'F':
#print(filename, 'F')
F.append(filename)
try:
os.mkdir(
os.path.join(basic_path, i, 'F')
)
except FileExistsError:
pass
shutil.move(
os.path.join(basic_path, i, filename),
os.path.join(basic_path, i, 'F', filename)
)
else:
print('Cannot find gender')
# 3. Select at most 100 male and 100 female speakers per region.
# Random selection
# random.sample(list, n_sample)
target_path = os.path.join('../Dataset_audio/old_total')
def speaker_select(target_path):
region = ['A', 'B', 'C', 'D', 'E', 'F']
gender = ['M', 'F']
result = []
for i in region:
for g in gender:
print(i, '-', g)
try:
by_gender_files = os.listdir(os.path.join(target_path, i, g))
# Deduplicate speaker IDs (filename prefix) before sampling, and cap at 100
# so cells with fewer speakers do not raise ValueError
by_gender_speaker = list(set(file[:6] for file in by_gender_files))
selected_speaker = random.sample(by_gender_speaker, min(100, len(by_gender_speaker)))
result.append(selected_speaker)
print('num of selected_speaker: ', len(selected_speaker))
except FileNotFoundError:
pass
return result
selected_speakers = speaker_select(target_path)
# file select
target_path = r'../Dataset_audio/old_total'
def file_select(target_path, selected_speakers):
err_count = []
region = ['A', 'B', 'C', 'D', 'E', 'F']
for i in region:
print(i)
for (path, dir, files) in os.walk(os.path.join(target_path, i)):
for filename in files:
# ext = os.path.splitext(filename)[-1]
# if ext == '.wav':
speaker = filename[:6]
g = os.path.splitext(filename)[0][-6]
for x in selected_speakers:
if speaker in x:
#print('he/she is selected speaker.')
if g == 'M':
#print('{} is male'.format(speaker))
try:
os.makedirs(
os.path.join(target_path, i, 'selected_M', speaker)
)
except:
pass
shutil.copy(
os.path.join(target_path, i, 'M', filename),
os.path.join(target_path, i, 'selected_M', speaker, filename)
)
elif g == 'F':
#print('{} is female'.format(speaker))
try:
os.makedirs(
os.path.join(target_path, i, 'selected_F', speaker)
)
except:
pass
shutil.copy(
os.path.join(target_path, i, 'F', filename),
os.path.join(target_path, i, 'selected_F', speaker, filename)
)
else:
print('cannot find gender')
err_count.append(filename)
print(err_count)
file_select(target_path, selected_speakers)
# Find the files placed in selected_folders
# At most 30 per speaker
target_path = r'../Dataset_audio/old_total'
selected_folders = ['selected_M', 'selected_F']
def finding_selected_files(folder_name_list):
filenames_random = []
for i in region:
for (path, dir, files) in os.walk(target_path + '/' + i):
#print('current path:', path)
#print('curren dir:', dir)
if path.split('/')[-2] in folder_name_list:
filenames = []
for filename in files:
#print('filename: ', filename)
ext = os.path.splitext(filename)[-1]
if ext == '.wav':
filenames.append(filename)
filenames_random += random.sample(filenames, min(len(filenames), 30)) # at most 30
return filenames_random
selected_files = finding_selected_files(selected_folders)
len(selected_files)
# Copy the randomly selected files
speaking_path = r'../Dataset_audio/Speaking'
def final_selected_files(new_path, filename_list):
target_path = r'../Dataset_audio/old_total'
for (path, dir, files) in os.walk(target_path):
for filename in files:
if filename in filename_list:
try:
shutil.copy(
os.path.join(path, filename),
os.path.join(new_path, filename)
)
#print(os.path.join(path, filename))
#print(os.path.join(new_path, filename), 'copied')
except FileNotFoundError:
pass
final_selected_files(speaking_path, selected_files)
len(os.listdir(r'../Dataset_audio/Speaking'))
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/maiormarso/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module3-make-explanatory-visualizations/LS_DS_123_Make_Explanatory_Visualizations_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ASSIGNMENT
### 1) Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit).
Get caught up to where we got our example in class and then try and take things further. How close to "pixel perfect" can you make the lecture graph?
Once you have something that you're proud of, share your graph in the cohort channel and move on to the second exercise.
### 2) Reproduce another example from [FiveThirtyEight's shared data repository](https://data.fivethirtyeight.com/).
**WARNING**: There are a lot of very custom graphs and tables at the above link. I **highly** recommend not trying to reproduce any that look like a table of values or something really different from the graph types that we are already familiar with. Search through the posts until you find a graph type that you are more or less familiar with: histogram, bar chart, stacked bar chart, line chart, [seaborn relplot](https://seaborn.pydata.org/generated/seaborn.relplot.html), etc. Recreating some of the graphics that 538 uses would be a lot easier in Adobe photoshop/illustrator than with matplotlib.
- If you put in some time to find a graph that looks "easy" to replicate you'll probably find that it's not as easy as you thought.
- If you start with a graph that looks hard to replicate you'll probably run up against a brick wall and be disappointed with your afternoon.
```
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
plt.style.use('fivethirtyeight')
import numpy as np
import pandas as pd
fake = pd.Series([38,3,2,1,2,4,6,5,5,33])
fig = plt.figure()
fig.patch.set(facecolor='white')
ax = fake.plot.bar(color='C1', width=0.9)
ax.set(facecolor='White')
plt.xlabel('xlabel')
plt.ylabel('')
ax.text(x=-1.8, y=44, s='An Inconvenient',
fontweight='bold', fontsize= 12)
ax.text(x=-1.8, y=41.5, s='IMDb ratings', fontsize=11)
ax.set_ylabel('percent y label', fontsize=9, fontweight='bold')
ax.set_xlabel('Rating', fontsize=9, fontweight='bold', labelpad=10)
ax.set_xticklabels([1,2,3,4,5,6,7,8,9,10], rotation=0)
ax.set_yticks(range(0,50,10))
fmt='%.0f%%'
xticks = mtick.FormatStrFormatter(fmt)
ax.text(x=-1.8, y=44, s="An Inconvenient Sequel: 'Truth To Power' is divisive", fontsize=12, fontweight='bold')
ax.text(x=-1.8, y=41.5, s='IMDb ratings for the film as of Aug 29', fontsize=11)
ax.set_yticklabels(range(0,50,10))
fmt='%.0f%%'
xticks = mtick.FormatStrFormatter(fmt)
ax.yaxis.set_major_formatter(xticks)
plt.plot();
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib.ticker as mtick
plt.style.use('fivethirtyeight')
fake =pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33],
index=range(1,11))
#generate the figure
fig = plt.figure()
fig.patch.set(facecolor='palegreen')
# generate the axes (center section) for the plot
ax = fake.plot.bar(color='C1', width=0.9)
ax.set(facecolor='White')
#11:am
#matplotlib.pyplot.text
ax.text(x=-1.8, y=45, s="An Inconvenient Sequel: 'Truth To Power' is divisive", fontsize=12, fontweight='bold')
ax.text(x=-1.8, y=42.5, s='IMDb ratings for the film as of Aug 29', fontsize=11)
ax.text(x=-.8, y=39.3, s='%', fontsize=11, color='gray')
ax.set_ylabel('Percent of total votes', fontsize=9, fontweight='bold')
ax.set_xlabel('Rating', fontsize=9, fontweight='bold', labelpad=10)
# fix our tick lables
ax.set_xticklabels([1,2,3,4,5,6,7,8,9,10], color='gray', rotation=0) # (range(1,11))
ax.set_yticks(range(0,50,10))
ax.set_yticklabels(range(0, 50 , 10), color='gray',)
# fmt='%.0f%%'
# xticks = mtick.FormatStrFormatter(fmt)
# ax.yaxis.set_major_formatter(xticks)
plt.show()
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
import datetime
# Qwelian Tanner helped me with the code
approve = [.455,.500,.488,.537,.549,.560,.543,.574,.536,.567,.556,.575,.543,.532,.538,.543,.520,.516,.524,.537,.520,.525,.521,.558,.536,.532,.530,.526,.528,.526,.537,.539]
disapprove = [.413,.438,.447,.400,.391,.381,.396,.366,.395,.375,.382,.364,.402,.415,.405,.403,.420,.424,.421,.399,.425,.422,.424,.398,.419,.416,.420,.424,.424,.430,.421,.415]
dates = ['Jan23','Feb20','Mar11','Apr4','May21','Jun11','Jul1','Aug7','Sep21','Oct26','Nov28','Dec16','Jan17','Feb15','Mar18','Apr21','May10','Jun23','Aug16','Sep14','Oct8','Nov9','Dec9','Jan29','Feb19','Mar18','Apr17','May8','Jun14','Jul22','Aug18','Sep11']
numdays = 32
base = datetime.datetime.today()
dates = [base - datetime.timedelta(days=x) for x in range(numdays)]
fill = {
'dates': dates,
'approval': approve,
'disapproval': disapprove,
}
# Calling DataFrame constructor on list
df1 = pd.DataFrame(data=fill)
df1.head(1)
df1.plot(x="dates", y=["approval", "disapproval"], kind="line");
```
# STRETCH OPTIONS
### 1) Reproduce one of the following using the matplotlib or seaborn libraries:
- [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/)
- [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/)
- or another example of your choice!
### 2) Make more charts!
Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary).
Find the chart in an example gallery of a Python data visualization library:
- [Seaborn](http://seaborn.pydata.org/examples/index.html)
- [Altair](https://altair-viz.github.io/gallery/index.html)
- [Matplotlib](https://matplotlib.org/gallery.html)
- [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html)
Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes.
Take notes. Consider sharing your work with your cohort!
```
```
|
github_jupyter
|
## Exploratory data analysis of Dranse discharge data
Summary: The data is stationary even without differencing, but the ACF and PACF plots show that an hourly first-order difference and a periodic 24h first-order difference are needed for SARIMA fitting.
Note: The final fitting was done in Google Colab due to memory constraints - this notebook will throw some errors
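The two differences referred to in the summary (an hourly lag-1 difference plus a seasonal lag-24 difference) can be sketched on a synthetic series; on a pure daily cycle plus a linear trend, they cancel everything:

```python
import numpy as np
import pandas as pd

# Synthetic hourly stand-in for the discharge data: a 24h cycle plus a linear trend
t = np.arange(24 * 10)
y = pd.Series(np.sin(2 * np.pi * (t % 24) / 24) + 0.01 * t)

diff_1 = y.diff(1).dropna()            # hourly first-order difference (d=1)
diff_1_24 = diff_1.diff(24).dropna()   # seasonal 24h difference on top (D=1, s=24)

# The cycle and the trend are fully removed
print(np.allclose(diff_1_24.values, 0.0))  # True
```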
## SARIMAX model fitting
### 1.) Loading the river flow (discharge) data
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from river_forecast.training_data_access import get_combined_flow
flow_df = get_combined_flow()
plt.plot(flow_df.index, flow_df)
```
### Exploratory Analysis
```
subset_df = flow_df.loc[:]
subset_df['year'] = subset_df.index.year
subset_df['offset_datetime'] = subset_df.index + pd.DateOffset(year=2019)
sns.set(style="whitegrid")
sns.set(rc={'figure.figsize':(15, 8)})
ax = sns.lineplot(x='offset_datetime', y='discharge', hue='year', data=subset_df, markers='')
import matplotlib.dates as mdates
myFmt = mdates.DateFormatter('%b')
ax.get_xaxis().set_major_formatter(myFmt)
ax.set_xlabel('Month')
ax.set_ylabel('Discharge (m^3/s)')
```
### train-test split
```
import statsmodels.api as sm
train = flow_df.loc[flow_df.index < pd.to_datetime('2019-01-01 00:00:00')]
test = flow_df.loc[(flow_df.index >= pd.to_datetime('2019-01-01 00:00:00')) & (flow_df.index < pd.to_datetime('2019-07-01 00:00:00'))]
fig, ax = plt.subplots()
train.plot(ax=ax, label='train')
test.plot(ax=ax, label='test')
plt.legend()
plt.show()
```
### Time series stationarity analysis
```
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
def tsplot(y, lags=None, figsize=(12, 7), style='bmh'):
"""
Plot time series, its ACF and PACF, calculate Dickey–Fuller test
-> Adapted from https://gist.github.com/DmitrySerg/14c1af2c1744bb9931d1eae6d9713b21
y - timeseries
lags - how many lags to include in ACF, PACF calculation
"""
if not isinstance(y, pd.Series):
y = pd.Series(y)
with plt.style.context(style):
fig = plt.figure(figsize=figsize)
layout = (2, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
y.plot(ax=ts_ax)
t_statistic, p_value = sm.tsa.stattools.adfuller(y)[:2]
ts_ax.set_title('Time Series Analysis Plots\n Dickey-Fuller: p={0:.5f}'.format(p_value))
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
plt.tight_layout()
```
#### Augmented Dickey-Fuller test to check for stationarity
```
flow = flow_df['discharge']
flow_diff_1 = (flow - flow.shift(1)).dropna()
flow_diff_1_24 = (flow_diff_1 - flow_diff_1.shift(24)).dropna()
flow_diff_24 = (flow - flow.shift(24)).dropna()
tsplot(flow, lags=24*5, figsize=(12, 7))
tsplot(flow_diff_1, lags=24*5, figsize=(12, 7))
tsplot(flow_diff_1_24, lags=24*7, figsize=(12, 7))
tsplot(flow_diff_1_24, lags=12, figsize=(12, 7))
```
#### Fitting SARIMAX
```
train['discharge'].plot()
from statsmodels.tsa.statespace.sarimax import SARIMAX
### Crashed again upon completion, make sure the time series is ok -> computation moved to Colab
# Create a SARIMAX model
model = SARIMAX(train['discharge'], order=(4,1,1), seasonal_order=(0,1,1,24))
# p - try 0, 1, 2, 3, 4; q is clearly one. Q is clearly 1, P is tapering off: 0.
# Fit the model
results = model.fit()
import pickle
pickle.dump(results.params, open('../models/sarimax_211_011-24_model-parameters.pkl', 'wb'))
### # load model
### loaded = ARIMAResults.load('model.pkl')
results = pickle.load(open('../models/sarimax_211_011-24_model.pkl', 'rb'))
pwd
# Print the results summary
print(results.summary())
results
```
#### Plotting the forecast
```
# Generate predictions
one_step_forecast = results.get_prediction(start=-48)
# Extract prediction mean
mean_forecast = one_step_forecast.predicted_mean
# Get confidence intervals of predictions
confidence_intervals = one_step_forecast.conf_int()
# Select lower and upper confidence limits
lower_limits = confidence_intervals.loc[:, 'lower discharge']
upper_limits = confidence_intervals.loc[:, 'upper discharge']
# plot the dranse data
# plot your mean predictions
plt.plot(mean_forecast.index, mean_forecast, color='r', label='forecast')
# shade the area between your confidence limits
plt.fill_between(lower_limits.index, lower_limits,
upper_limits, color='pink')
# set labels, legends and show plot
plt.xlabel('Date')
plt.ylabel('Discharge')
plt.title('hourly forecast')
plt.legend()
plt.show()
# Generate predictions
dynamic_forecast = results.get_prediction(start=-6, dynamic=True)
# Extract prediction mean
mean_forecast = dynamic_forecast.predicted_mean
# Get confidence intervals of predictions
confidence_intervals = dynamic_forecast.conf_int(alpha=0.32) # 68 percent (one-sigma) confidence interval
# Select lower and upper confidence limits
lower_limits = confidence_intervals.loc[:,'lower discharge']
upper_limits = confidence_intervals.loc[:,'upper discharge']
# plot your mean predictions
plt.plot(mean_forecast.index, mean_forecast, color='r', label='forecast')
# shade the area between your confidence limits
plt.fill_between(lower_limits.index, lower_limits,
upper_limits, color='pink', alpha=0.5)
# set labels, legends and show plot
plt.xlabel('Date')
plt.ylabel('Discharge')
plt.title('dynamic forecast')
plt.legend()
```
#### Finding the best model manually
```
# Create empty list to store search results
order_aic_bic=[]
# Loop over p values from 0-4
for p in range(0, 5):
print(p)
# create and fit ARMA(p,q) model
model = SARIMAX(train['discharge'], order=(p,1,1), seasonal_order=(0,1,1,24))
# p - try 0, 1, 2, 3, 4; q is clearly one. Q is clearly 1, P is tapering off: 0.
results = model.fit()
# Append order and results tuple
order_aic_bic.append((p,results.aic, results.bic))
# Construct DataFrame from order_aic_bic
order_df = pd.DataFrame(order_aic_bic,
columns=['p', 'AIC', 'BIC'])
# Print order_df in order of increasing AIC
print(order_df.sort_values('AIC'))
# Print order_df in order of increasing BIC
print(order_df.sort_values('BIC'))
# Create the 4 diagnostics plots
results.plot_diagnostics()
plt.show()
# Print summary
print(results.summary())
```
### Forecasting
```
results.forecast(steps=6)
resB.forecast(steps=6)
import river_forecast.api_data_access
import importlib, sys
importlib.reload(river_forecast.api_data_access)
rivermap_data = river_forecast.api_data_access.RivermapDataRetriever()
recent_flow_df = rivermap_data.get_latest_river_flow(n_days=3, station='Dranse')
recent_flow_df
modelB = SARIMAX(recent_flow_df.iloc[:2].asfreq('h'), order=(4,1,1), seasonal_order=(0,1,1,24))
resB = modelB.smooth(results.params)
resB.forecast(steps=6)
from river_forecast.api_data_access import RivermapDataRetriever
data = RivermapDataRetriever().get_standard_dranse_data()
data
import importlib
import river_forecast.forecast
importlib.reload(river_forecast.forecast)
sf = river_forecast.forecast.SARIMAXForecast()
sf.generate_prediction_plot(data)
sf.dynamic_forecast(data)
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
from pymedphys_monomanage.tree import PackageTree
import os

import networkx as nx
from copy import copy
package_tree = PackageTree('../../packages')
package_tree.package_dependencies_digraph
package_tree.roots
modules = list(package_tree.digraph.neighbors('pymedphys_analysis'))
modules
internal_packages = copy(package_tree.roots)
internal_packages.remove('pymedphys')
module_paths = [
item
for package in internal_packages
for item in package_tree.digraph.neighbors(package)
]
modules = {
item: os.path.splitext(item)[0].replace(os.sep, '.')
for item in module_paths
}
modules
module_digraph = nx.DiGraph()
dependencies = {
module.replace(os.sep, '.'): [
'.'.join(item.split('.')[0:2])
for item in
package_tree.descendants_dependencies(module)['internal_module'] + package_tree.descendants_dependencies(module)['internal_package']
]
for module in modules.keys()
}
dependencies
dependents = {
key: [] for key in dependencies.keys()
}
for key, values in dependencies.items():
for item in values:
dependents[item].append(key)
dependents
current_modules = [
item.replace(os.sep, '.')
for item in package_tree.digraph.neighbors('pymedphys_analysis')
]
current_modules
def remove_prefix(text, prefix):
if text.startswith(prefix):
return text[len(prefix):]
else:
return text
graphed_module = 'pymedphys_monomanage'
current_modules = [
item.replace(os.sep, '.')
for item in package_tree.digraph.neighbors(graphed_module)
]
current_modules
def simplify(text):
text = remove_prefix(text, "{}.".format(graphed_module))
text = remove_prefix(text, 'pymedphys_')
return text
current_modules
module_internal_relationships = {
module.replace(os.sep, '.'): [
'.'.join(item.split('.')[0:2])
for item in
package_tree.descendants_dependencies(module)['internal_module']
]
for module in package_tree.digraph.neighbors(graphed_module)
}
module_internal_relationships
dag = nx.DiGraph()
for key, values in module_internal_relationships.items():
dag.add_node(key)
dag.add_nodes_from(values)
edge_tuples = [
(key, value) for value in values
]
dag.add_edges_from(edge_tuples)
dag.edges()
def get_levels(dag):
topological = list(nx.topological_sort(dag))
level_map = {}
for package in topological[::-1]:
dependencies = nx.descendants(dag, package)
levels = {0}
for dependency in dependencies:
try:
levels.add(level_map[dependency])
except KeyError:
pass
max_level = max(levels)
level_map[package] = max_level + 1
levels = {
level: []
for level in range(max(level_map.values()) + 1)
}
for package, level in level_map.items():
levels[level].append(package)
return levels
levels = get_levels(dag)
levels
nodes = ""
for level in range(max(levels.keys()) + 1):
if levels[level]:
trimmed_nodes = [
simplify(node) for node in levels[level]
]
grouped_packages = '"; "'.join(trimmed_nodes)
nodes += """
{{ rank = same; "{}"; }}
""".format(grouped_packages)
print(nodes)
edges = ""
current_packages = ""
current_dependents = set()
current_dependencies = set()
for module in current_modules:
module_repr = simplify(module)
current_packages += '"{}";\n'.format(module_repr)
for dependency in dependencies[module]:
simplified = simplify(dependency)
edges += '"{}" -> "{}";\n'.format(module_repr, simplified)
if not dependency in current_modules:
current_dependencies.add(simplified)
for dependent in dependents[module]:
simplified = simplify(dependent)
edges += '"{}" -> "{}";\n'.format(simplified, module_repr)
if not dependent in current_modules:
current_dependents.add(simplified)
external_ranks = ""
if current_dependents:
grouped_dependents = '"; "'.join(current_dependents)
external_ranks += '{{ rank = same; "{}"; }}\n'.format(grouped_dependents)
if current_dependencies:
grouped_dependencies = '"; "'.join(current_dependencies)
external_ranks += '{{ rank = same; "{}"; }}\n'.format(grouped_dependencies)
print(edges)
dot_file_contents = """
strict digraph {{
rankdir = LR;
{}
subgraph cluster_0 {{
{}
label = "{}";
style = dashed;
{}
}}
{}
}}
""".format(external_ranks, current_packages, graphed_module, nodes, edges)
print(dot_file_contents)
# DOT reference for a cluster subgraph (not Python -- kept as a comment):
# subgraph cluster_0 {
#   style=filled;
#   color=lightgrey;
#   node [style=filled,color=white];
#   a0 -> a1 -> a2 -> a3;
#   label = "process #1";
# }
package_tree.descendants_dependencies('pymedphys_monomanage/parse')
package_tree.imports
list(package_tree.digraph.nodes)
```
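The `get_levels` routine above assigns each package a rank based on its longest dependency chain; a self-contained re-statement (trimmed to the essentials, with the same output shape) behaves like this:

```python
import networkx as nx

def get_levels(dag):
    # Walk the reverse topological order so every node's dependencies
    # are levelled before the node itself
    level_map = {}
    for node in reversed(list(nx.topological_sort(dag))):
        deps = nx.descendants(dag, node)
        level_map[node] = 1 + max([level_map[d] for d in deps], default=0)
    levels = {level: [] for level in range(max(level_map.values()) + 1)}
    for node, level in level_map.items():
        levels[level].append(node)
    return levels

toy = nx.DiGraph()
toy.add_edges_from([('a', 'b'), ('b', 'c')])  # a depends on b, b on c
print(get_levels(toy))  # {0: [], 1: ['c'], 2: ['b'], 3: ['a']}
```

Leaves end up on the lowest non-empty level, which is what makes the `rank = same` groups in the generated DOT file line up left-to-right.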
|
github_jupyter
|
##### Notebook that consolidates the preprocessing ipynbs in the M5_Idol_lyrics/SongTidy folder and fixes the broken code
### Lyrics data (song_tidy01) preprocessing
**df = pd.read_csv('rawdata/song_data_raw_ver01.csv')**<br>
**!!! DataFrames are named df(number) in step order !!!**
1. Start from the Data20180915/song_data_raw_ver01.csv data (check whether the artist '키스' is present)
- rows whose title contains remix, live, inst, Eng/Jap/Chn, or ver
- rows whose album is from the 나가수 / 불명 / 복면 TV shows
- rows whose title, lyrics, or album contain hiragana/katakana
- change rows where is_title is NaN to '수록곡' (album track)
- replace \r\r\n in the lyrics with whitespace
2. Cases where Japanese lyrics transliterated in Hangul remain even after removing hiragana/katakana<br>--> check with contains, then repeatedly drop such rows
3. Cases where the lyrics are entirely English or Chinese<br>--> drop rows whose lyrics contain no Hangul at all
4. Split the creator column into lyricist, composer, and arranger --> see creator_tidy_kavin_ver02.ipynb<br>
**Save an intermediate file here with df4.to_csv('tidydata/tidy01.csv', index=False)**
5. Remove duplicate songs (caused by spacing, capitalization, featuring) --> see song_tidy_yoon_ver01.ipynb<br>
**!!! It turns out the crawl did not store songs in release-date order; sort by 'artist', 'release_date' before dropping duplicates. !!!**<br>**Save here with df5.to_csv('tidydata/song_tidy01.csv', index=False)**
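The sort-then-deduplicate idea in step 5 can be sketched on toy data (the rows here are invented):

```python
import pandas as pd

songs = pd.DataFrame({
    'artist': ['A', 'A', 'A'],
    'title': ['Love Song', 'love song', 'Love Song (feat. B)'],
    'release_date': ['2010.01.01', '2009.01.01', '2011.01.01'],
})

# Normalize titles (drop spaces, lowercase) so near-duplicates collide
songs['title'] = songs['title'].str.replace(' ', '', regex=False).str.lower()

# Sort so the oldest release comes first, then keep the first of each duplicate
songs = songs.sort_values(['artist', 'release_date'])
songs = songs.drop_duplicates(['artist', 'title'], keep='first')

print(list(songs['release_date']))  # ['2009.01.01', '2011.01.01']
```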
-----------------
### Lyricist/composer data (lyricist_tidy01) preprocessing
**Reload the earlier file: df6 = pd.read_csv('tidydata/tidy01.csv')**
6. Remove rows without a creator
7. Remove duplicate songs (caused by spacing, capitalization, featuring)<br>
**Save here with df7.to_csv('tidydata/lyricist_tidy01.csv', index=False)**
```
import pandas as pd
import re
df1 = pd.read_csv('C:/Users/pje17/Desktop/Lyricsis/M5_Idol_lyrics/Data/Data20180921/song_data_raw_20180921_ver02.csv')
df1.head()
# Check whether the artist '키스' is present
df1[df1['artist'] == '키스']
df1.shape
# Drop the index column
df1 = df1.drop(df1.columns[0], axis=1)
df1
# Drop rows with no lyrics
df1 = df1[df1.lyrics.notnull()]
df1.shape
# Replace the line breaks and whitespace
df1['lyrics'] = df1['lyrics'].str.replace(r'\r\r\r\n|\r\r\n','<br>')
df1['creator'] = df1['creator'].str.replace(r'\r|\n',' ')
df1
# When is_title is NaN, set it to '수록곡' (album track)
df1['is_title'] = df1['is_title'].fillna('수록곡')
df1
# Remove rows whose title contains remix, live, inst, Eng/Jap/Chn, or ver
df1 = df1[df1.title.str.contains(r'\(.*\s*([Rr]emix|[Mm]ix|[Ll]ive|[Ii]nst|[Cc]hn|[Jj]ap|[Ee]ng|[Vv]er)\s*.*\)') == False]
# Remove rows whose album is from the 나가수 / 불후의 명곡 / 복면가왕 TV shows
df1 = df1[df1.album.str.contains(r'(가수다|불후의|복면가왕)') == False]
# Delete rows with hiragana/katakana in the title
df1 = df1[df1.title.str.contains(u'[\u3040-\u309F\u30A0-\u30FF\u31F0-\u31FF]+') == False]
df1
# Some rows were not removed
df1.loc[df1['lyrics'].str.contains((u'[\u3040-\u309F\u30A0-\u30FF\u31F0-\u31FF]+'), regex=True)]
# Delete once more
df1 = df1[df1.lyrics.str.contains(u'[\u3040-\u309F\u30A0-\u30FF\u31F0-\u31FF]+') == False]
df1
# Confirm that everything was deleted
df1.loc[df1['lyrics'].str.contains((u'[\u3040-\u309F\u30A0-\u30FF\u31F0-\u31FF]+'), regex=True)]
df1.info()
```
## -------------------- Preprocessing step 2 --------------------
```
# Preprocess cases where Japanese lyrics transliterated in Hangul remain after removing hiragana/katakana
df2 = df1[df1.lyrics.str.contains(r'(와타시|혼토|아노히|혼또|마센|에가이|히토츠|후타츠|마치노|몬다이|마에노|아메가)') == False]
df2= df2[df2.lyrics.str.contains(r'(히카리|미라이|오나지|춋|카라다|큥|즛또|나캇|토나리|못또|뎅와|코이|히토리|맛스구|후타리|케시키|쟈나이|잇슌|이츠모|아타라|덴샤|즈쿠|에가오|소라오|난테|고멘네|아이시테|다키시|유메|잇탄다|소레|바쇼)') == False]
df2= df2[df2.lyrics.str.contains(r'(키미니|보쿠|세카이|도코데|즛토|소바니|바쇼|레루|스베테|탓테|싯테|요쿠)') == False]
# 450 songs removed
df2.info()
```
## -------------------- Preprocessing step 3 --------------------
```
# Keep only rows whose lyrics contain at least one Hangul character.
# 469 songs removed
df3 = df2[df2.lyrics.str.contains(r'[가-힣]+') == True]
df3.info()
```
## -------------------- Preprocessing step 4 --------------------
```
# Split the creator column into lyricist, composer, and arranger
df4 = df3.copy()
# Reset the index so the rows do not get misaligned
df4 = df4.reset_index(drop=True)
# Preprocessing function
def preprocess(text):
splitArr = list(filter(None, re.split("(작사)|(작곡)|(편곡)", text)))
lyricist = []
composer = []
arranger = []
lyricist.clear()
composer.clear()
arranger.clear()
i = 0
for i in range(0, len(splitArr)):
if splitArr[i] == "작사":
lyricist.append(splitArr[i-1].strip())
elif splitArr[i] == "작곡":
composer.append(splitArr[i-1].strip())
elif splitArr[i] == "편곡":
arranger.append(splitArr[i-1].strip())
i = i + 1
result = [', '.join(lyricist), ', '.join(composer), ', '.join(arranger)]
return result
# Preview the parsed lyricist/composer/arranger result for one row
preprocess(df4.creator[0])
# Preprocess the song dataframe with the function above and build an extra dataframe from the results
i = 0
lyricist = []
composer = []
arranger = []
lyricist.clear()
composer.clear()
arranger.clear()
for i in range(0, len(df4)):
try:
lyricist.append(str(preprocess(df4.creator[i])[0]))
composer.append(str(preprocess(df4.creator[i])[1]))
arranger.append(str(preprocess(df4.creator[i])[2]))
except:
lyricist.append('')
composer.append('')
arranger.append('')
preprocessing_result = pd.DataFrame({"lyricist" : lyricist, "composer" : composer, "arranger" : arranger})
# Index 3 is missing because it was a remix and was removed
preprocessing_result.head()
# Check that the two dataframes have the same length
len(df4) == len(preprocessing_result)
# 두 개의 데이터프레임 합치기
df4 = pd.concat([df4, preprocessing_result], axis=1)
df4
# 여기서 df4.to_csv('tidydata/tidy01.csv', index=False) 로 중간 저장
df4.to_csv('tidy03.csv', index=False)
```
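As an aside, the `re.split` call in `preprocess` works because capture groups make `re.split` keep the delimiters it split on. A minimal standalone demo, with a made-up credit string (the 작사/작곡 markers mean lyricist/composer):

```python
import re

# hypothetical creator string: person A wrote the lyrics (작사), person B the music (작곡)
text = "KimA 작사 LeeB 작곡"

# capture groups make re.split keep the delimiters; filter(None, ...) drops
# the empty strings and the None entries the three-way alternation produces
parts = list(filter(None, re.split("(작사)|(작곡)|(편곡)", text)))
print(parts)  # each marker comes right after the name it belongs to
```

Each name can then be recovered as the element just before its marker, which is exactly what the `splitArr[i-1]` lookups above do.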
## -------------------- Preprocessing step 5 --------------------
```
df5 = df4.copy()
# Fill empty/placeholder release dates with a future date so no row is left with a null release date.
d = {'': '2019.01.01', '-': '2019.01.01'}
df5['release_date'] = df5['release_date'].replace(d)
# The crawl did not return rows in release-date order,
# so sort by 'artist' and 'release_date' before deduplicating.
df5 = df5.sort_values(by=['artist', 'release_date'])
# Remove duplicate songs (duplicates caused by spacing, letter case, or featuring credits).
# Remove all whitespace from the titles
df5['title'] = df5['title'].str.replace(' ', '', regex=False)
# Lower-case the English parts of the titles
df5['title'] = df5['title'].str.lower()
# Then drop the duplicates again.
# The oldest release now sorts first, so keep='first' keeps the original version.
df5 = df5.drop_duplicates(['artist', 'title'], keep='first')
# Spot-check: the duplicates are gone (whitespace-removal test)
df5[df5['title'] == '결혼 하지마']
df5[df5['title'] == '결혼하지마']
# Spot-check: the duplicates are gone (upper-to-lower-case test)
df5[df5['title'] == '어이(uh-ee)']
df5[df5['title'].str.contains(r'어이\(uh-ee\)')]
# Create a temporary title column
df5['t'] = df5['title']
# and strip any parenthesized part from it.
df5.t = df5.t.str.replace(r'\(.*?\)', '', regex=True)
# Drop duplicates on the temporary column,
df5 = df5.drop_duplicates(['artist', 't'], keep='first')
# then delete it again.
df5 = df5.drop('t', axis = 1)
# Spot-check: only one version remains (different-featuring-version test)
df5[df5['title'].str.contains('highwaystar')]
df5.info()
df5[df5['title'] == '해석남녀']
# The only song still dated 2019 is the one by O2R,
df5[df5['release_date'] == '2019.01.01']
# so restore its real release date.
d = {'2019.01.01': '2002.07.19'}
df5['release_date'] = df5['release_date'].replace(d)
# Save; originally df5.to_csv('tidydata/song_tidy01.csv', index=False)
df5.to_csv('song_tidy03.csv', index=False)
```
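The three title normalizations used for deduplication above can be sketched on a plain string, without pandas; the titles below are made-up stand-ins for two listings of the same song:

```python
import re

def normalize_title(title):
    """Apply the same three normalizations as the dedup steps above."""
    title = title.replace(' ', '')        # drop all whitespace
    title = title.lower()                 # lower-case the Latin letters
    return re.sub(r'\(.*?\)', '', title)  # drop any parenthesized part

# two hypothetical listings of the same song
a = normalize_title('Highway Star (Feat. Someone)')
b = normalize_title('HIGHWAYSTAR')
print(a, b, a == b)
```

After normalization both listings collapse to the same key, so `drop_duplicates` keeps only one of them.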
## Lyrics data preprocessing done
------------------------------
## Lyricist/composer data preprocessing -------------------- Preprocessing step 6 --------------------
```
# Reload the df4 saved above; originally df6 = pd.read_csv('tidydata/tidy01.csv')
df6 = pd.read_csv('tidy03.csv')
df6
# The crawl did not return rows in release-date order,
# so sort by 'artist' and 'release_date' before starting.
df6 = df6.sort_values(by=['artist', 'release_date'])
df6
# Drop rows without a creator
df6 = df6[pd.notnull(df6['creator'])]
df6
```
## -------------------- Preprocessing step 7 --------------------
```
# Remove duplicate songs (spacing / letter case / featuring credits) << same steps as above
df7 = df6.copy()
# Remove all whitespace from the titles
df7['title'] = df7['title'].str.replace(' ', '', regex=False)
# Lower-case the English parts of the titles
df7['title'] = df7['title'].str.lower()
# Then drop the duplicates again.
df7 = df7.drop_duplicates(['artist', 'title'], keep='first')
# Create a temporary title column
df7['t'] = df7['title']
# and strip any parenthesized part from it.
df7.t = df7.t.str.replace(r'\(.*?\)', '', regex=True)
# Drop duplicates on the temporary column,
df7 = df7.drop_duplicates(['artist', 't'], keep='first')
# then delete it again.
df7 = df7.drop('t', axis = 1)
# Spot-check: only one version remains (different-featuring-version test)
df7[df7['title'].str.contains('highwaystar')]
# Note: this data ended up preprocessed differently from the lyrics data
df7.info()
# Save; originally df7.to_csv('tidydata/lyricist_tidy01.csv', index=False)
df7.to_csv('lyricist_tidy03.csv', index=False)
```
|
github_jupyter
|
<center>
<img src="http://i0.kym-cdn.com/photos/images/original/000/234/765/b7e.jpg" height="400" width="400">
</center>
# First seminar, stand by
__Our goals for today:__
* Launch Anaconda in under 20 minutes.
* Get a taste of Python
* Solve a couple of simple problems and submit them to Yandex.Contest
## 0. Where am I?
__Jupyter Notebook__ is a tool for running code interactively in the browser. It is used in a lot of places. You can write code, execute it, and look at the result.
Type `2 + 2` in the cell below and press __Shift__ and __Enter__ at the same time. Your code will run and you will see the answer. This is how we will write code from here on.
```
# write your code right here instead of the three dots
2+2
```
> Cells also come in different types. In this part of the seminar your instructor will show you a bit of how to work with Markdown in notebooks.
### Markdown
- [A 10-minute syntax tutorial](https://www.markdowntutorial.com/)
- [A short syntax guide](https://guides.github.com/features/mastering-markdown/)
## 1. Python as a calculator
You can add, multiply, divide, and so on...
```
2+2
4 * 7
3 * (2 + 5)
5 ** 1
5 / 2
5 // 2 # Man vs. machine, round 1: guess what you get!
5 % 2 # and here?
```
What about other operations? Let's try to take a square root:
```
1+1
sqrt(4)
```
Taking a square root is not among the mathematical operations available in Python by default, so instead of an answer we got some unintelligible complaining.
That complaining is called an exception. Some day we will learn to handle exceptions, but for now look at the last line: `NameError: name 'sqrt' is not defined` — in other words, "I don't know what sqrt is". Not all is lost, though: the function we need exists in the math module. To use it, we have to import that module, which can be done in several ways.
```
import math
math.sqrt(4)
```
Once the `math` module is imported, you can find out what other functions it has. In __IPython Notebook__ it is enough to type the module name, add a dot, and press the __Tab__ key. Here is the sine, for example:
```
math.sin(0)
```
This syntax can be inconvenient if you call mathematical functions often. To avoid typing the word "math" every time, you can import specific functions from the module.
```
from math import sqrt
sqrt(4)
```
You can also load a module or package under a different, shorter name, and use that.
```
import math as mh
mh.sqrt(4)
```
## 2. Variables
The notion of a "variable" in programming is similar to the corresponding notion in mathematics. A variable is a memory cell referred to by some name. The cell can hold numbers, strings, and more complex objects. For now we will work a little with numeric variables.
```
x = 4
y=1
y
x
x = x + 2
x
x
```
And what will be in $x$ if you run the cell again?
## 3. Types
Let's try writing numbers in different ways
```
4 * 42
'4' * 42 # Man vs. machine, round 2: guess what you get!
```
Arithmetic works differently for each type!
```
a = 'ёж'
b = 'ик'
a + b + a +a
type(4)
c = 3
type(c)
type('4')
type(4.0)
type(True)
```
- `str` - text (string)
- `int` - integer
- `float` - floating-point number (an ordinary real number)
- `bool` - boolean variable
Sometimes you can convert from one variable type to another.
```
x = '42'
print(type(x))
x = int(x)
print(type(x))
```
And sometimes you can't. Use common sense :)
```
x = 'Luke, I am your father'
print(type(x))
x = int(x)
print(type(x))
```
Boolean variables come up in all kinds of comparisons; we will use them heavily in the next seminar.
```
2 + 2 == 4
2 + 2 == 5
x = 5
x < 8
```
## 4. Real numbers and rounding errors
Real numbers are not so simple in programming. For example, let's compute the sine of $\pi$:
```
from math import pi, sin
sin(pi) # expecting 0? Ha-ha!
```
A confusing answer? First of all, this is the so-called [computer form of scientific (exponential) notation.](https://ru.wikipedia.org/wiki/Экспоненциальная_запись#.D0.9A.D0.BE.D0.BC.D0.BF.D1.8C.D1.8E.D1.82.D0.B5.D1.80.D0.BD.D1.8B.D0.B9_.D1.81.D0.BF.D0.BE.D1.81.D0.BE.D0.B1_.D1.8D.D0.BA.D1.81.D0.BF.D0.BE.D0.BD.D0.B5.D0.BD.D1.86.D0.B8.D0.B0.D0.BB.D1.8C.D0.BD.D0.BE.D0.B9_.D0.B7.D0.B0.D0.BF.D0.B8.D1.81.D0.B8) It is handy when you need to write very large or very small numbers: `1.2E2` means $1.2 \cdot 10^2$, that is `120`, and `2.4e-3` is the same as $2.4 \cdot 10^{-3} = 0.0024$.
The result Python computed for $\sin \pi$ is on the order of $10^{-16}$ — a very small number close to zero. Why not a "real" zero? The computer does all floating-point arithmetic with limited precision, so instead of "exact" answers you often get approximate ones like this. Be prepared for that.
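Both notations can be checked directly in Python:

```python
# 1.2E2 is 1.2 * 10**2, and 2.4e-3 is 2.4 * 10**-3
print(1.2E2)   # 120.0
print(2.4e-3)  # 0.0024
```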
```
# Man vs. machine, round 3: guess what you get!
0.4 - 0.3 == 0.1
0.4 - 0.3
```
Be careful when you compare real numbers.
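One standard way to do such comparisons safely is `math.isclose` from the standard library, which compares with a small tolerance instead of exact equality:

```python
from math import isclose

print(0.4 - 0.3 == 0.1)         # False: binary rounding error
print(isclose(0.4 - 0.3, 0.1))  # True: compares with a small tolerance
```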
## 5. Input and output
Working in Jupyter rarely requires code that asks for keyboard input by itself, but other applications (and homework in particular) may need it. Besides, writing interactive programs is a fun exercise in its own right. Let's write, for example, a program that greets us by name.
```
name = input("Enter your name: ")
print("Hello,", name)
name
```
What happened here? In the first line we used the `input` function. It printed the string it was given (always in quotes) and asked for keyboard input. I answered with my name, after which `input` returned that string and it was assigned to the variable `name`.
Then, in the second line, the `print` function was called with two strings — "Hello," and whatever was stored in the variable `name`. `print` printed the two strings one after another, separated by a space. Note that `name` still holds the string we typed.
Now let's write a "doubler" program. It should take a number as input, double it, and return the result.
```
x = input("Enter some number: ")
y = x * 2
print(y)
```
Something went wrong. What exactly? How do we fix it?
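One possible fix, sketched as a function so it can be tried without typing: `input` always returns a string, so `x * 2` repeats the text; converting with `int` first gives the arithmetic we wanted.

```python
def double(raw):
    """raw is what input() returns: always a string."""
    return int(raw) * 2  # convert first, then do arithmetic

print('21' * 2)      # string repetition: the bug we just saw
print(double('21'))  # the doubling we actually wanted
```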
## 6. Making friends with search engines
__Task:__ I want to generate a random number, but I don't know how to do it.
At this point your instructor will perform a death-defying stunt: right before your eyes, he will google how to generate a random number and find code for it.
```
# A spot for a small miracle
```
Did you see the miracle? Let's agree that you won't be shy about googling the commands you need and searching the internet for answers to your questions. If it really doesn't work out, ask in our Telegram technical-support chat.
## 7. The contest
Yandex.Contest is a system for automatic testing of code. You will run into it throughout our course. Let's try working with it and move the name-greeting solution there.
> The problems for the first seminar are available here: https://official.contest.yandex.ru/contest/24363/enter/
We will spend all the remaining time solving contest problems. Recommended list: B, H, O, X, then any others :)
```
### ╰( ͡° ͜ʖ ͡° )つ▬▬ι═══════ bzzzzzzzzzz
# will the code be with you
```
## 8. The Zen of Python and PEP 8
As we saw above, the `import` command loads various packages and modules. One module you absolutely have to load in the very first class is the `this` module
```
import this
```
The developers of Python follow a certain programming philosophy called "The Zen of Python". That is what we just printed above. Study this philosophy and start following it too.
We also recommend studying the [PEP 8 style guide.](https://pythonworld.ru/osnovy/pep-8-rukovodstvo-po-napisaniyu-koda-na-python.html) At the end of the course a homework with code review awaits you, where we will require PEP 8 compliance. Code should be readable :)
## Your assignment
- Finish all the problems from the [first contest.](https://official.contest.yandex.ru/contest/24363/enter/) Note that they correspond exactly to the [first week](https://www.coursera.org/learn/python-osnovy-programmirovaniya/home/week/1) of the Coursera course recommended to you. You can solve them in the contest or on Coursera, whichever is more convenient. Try to solve at least half of them.
- As an alternative, you can try [similar problems on pythontutor](https://pythontutor.ru/lessons/inout_and_arithmetic_operations/)
Solving these exercises is not graded by us in any way. They are there for practice, so that the homework and in-class quizzes are easier for you later.
> What's more, you can ask for problems you could not solve or did not understand to be covered at seminars and office hours. Ask your seminar instructor or the assistants.

## Imports:
```
from PIL import Image, ImageOps
import cv2
from google_images_download import google_images_download
solicitor = google_images_download.googleimagesdownload()
arguments = {"keywords":"consorcio feliz", "aspect_ratio":"square", "color_type":"transparent", "limit":7,"output_directory":"images", "no_directory":True}
paths = solicitor.download(arguments)
```
## Instantiating the variables
```
imagePath = "topo.png"
dinamic_image = Image.open(imagePath)
new_width = 386
new_height = 390
```
## Processing:
```
image1 = dinamic_image.resize((new_width, new_height), Image.NEAREST) # use nearest neighbour
image2 = dinamic_image.resize((new_width, new_height), Image.BILINEAR) # linear interpolation in a 2x2 environment
image3 = dinamic_image.resize((new_width, new_height), Image.BICUBIC) # cubic spline interpolation in a 4x4 environment
image4 = dinamic_image.resize((new_width, new_height), Image.ANTIALIAS) # best down-sizing filter
```
## Save
```
ext = ".png"
image1.save("top" + ext)  # worst result in my tests, but smallest size
image2.save("top2" + ext) # average result and size
image3.save("top3" + ext) # best result
image4.save("top4" + ext) # best result too
#dinamic_image.save("original" + ext) #convert original to png
#LESS CODE
imagePath = "original.png"
dinamic_image = Image.open(imagePath)
new_width = 100
new_height = 100
image4 = dinamic_image.resize((new_width, new_height), Image.ANTIALIAS)
image4.save(imagePath) #Best result too
# USE TO MAKE A SQUARE IMAGE FROM A RECTANGULAR ONE
from PIL import Image, ImageOps
desired_size = 333
im_pth = "path/img.png"
im = Image.open(im_pth)
print(im.mode)
old_size = im.size
ratio = float(desired_size)/max(old_size)
# scale both sides by the ratio so the longer side becomes desired_size
# (the original built new_size without multiplying by ratio, making the resize a no-op)
new_size = tuple(int(x * ratio) for x in old_size)
im = im.resize(new_size, Image.ANTIALIAS)
# one approach: paste the resized image centered on a new black canvas
new_im = Image.new("RGB", (desired_size, desired_size))
new_im.paste(im, ((desired_size-new_size[0])//2,
                    (desired_size-new_size[1])//2))
# equivalent approach via ImageOps.expand, which is the one kept below
delta_w = desired_size - new_size[0]
delta_h = desired_size - new_size[1]
padding = (delta_w//2, delta_h//2, delta_w-(delta_w//2), delta_h-(delta_h//2))
print(padding)
new_im = ImageOps.expand(im, padding, fill=0)  # fill=0 pads with black
im_final = new_im.resize((new_width, new_height), Image.BICUBIC)  # new_width/new_height from the cell above
im_final.save(im_pth)
```
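The padding arithmetic above can be checked without Pillow: given a resized size and the target square size, the four border widths must add back up to the target. The sizes below are made up for illustration:

```python
desired_size = 333
new_size = (333, 250)  # a hypothetical resized landscape image

delta_w = desired_size - new_size[0]
delta_h = desired_size - new_size[1]
# (left, top, right, bottom); the delta - delta//2 terms absorb odd deltas
padding = (delta_w // 2, delta_h // 2,
           delta_w - (delta_w // 2), delta_h - (delta_h // 2))
print(padding)  # (0, 41, 0, 42)

# the padded image comes out exactly desired_size x desired_size
print(new_size[0] + padding[0] + padding[2],
      new_size[1] + padding[1] + padding[3])  # 333 333
```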
## Transform white into transparent
```
def white_cleaner():
    from PIL import Image
    img = Image.open('img.png')
    img = img.convert("RGBA")
    pixdata = img.load()
    width, height = img.size  # was image.size, a NameError
    for y in range(height):   # xrange is Python 2 only
        for x in range(width):
            if pixdata[x, y] == (255, 255, 255, 255):  # pure white, fully opaque
                pixdata[x, y] = (255, 255, 255, 0)     # make it transparent
    img.save("img2.png", "PNG")
from PIL import Image
img = Image.open('logo.jpeg')
img = img.convert("RGBA")
pixdata = img.load()
width, height = img.size
for y in range(height):
for x in range(width):
if pixdata[x, y] == (255, 255, 255, 255):
pixdata[x, y] = (255, 255, 255, 0)
img.save("logo.png")
```
```
# This handy piece of code changes Jupyter Notebooks margins to fit your screen.
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
```
## Be sure you've installed the praw and tqdm libraries. If you haven't, you can run the line below. Node.js is required to install the Jupyter widgets in a few cells. These two cells can take a while to run and won't show progress; you can also run the commands in the command prompt (without the !) to see the progress as they install.
If conda is taking a long time, you might try the mamba installer: https://github.com/TheSnakePit/mamba
`conda install -c conda-forge mamba -y`
Then installing packages with mamba should be done from the command line (console or terminal).
```
!conda install tqdm praw nodejs -y
```
Install the jupyter widget to enable tqdm to work with jupyter lab:
```
!jupyter labextension install @jupyter-widgets/jupyterlab-manager
```
# Scrape Reddit Comments for a Sentiment Analysis - Assignment
### Go through the notebook and complete the code where prompted
##### This assignment was adapted from a number of sources including: http://www.storybench.org/how-to-scrape-reddit-with-python/ and https://towardsdatascience.com/scraping-reddit-data-1c0af3040768
```
# Import all the necessary libraries
import praw # Import the Praw library: https://praw.readthedocs.io/en/latest/code_overview/reddit_instance.html
import pandas as pd # Import Pandas library: https://pandas.pydata.org/
import datetime as dt # Import datetime library
import matplotlib.pyplot as plt # Import Matplot lib for plotting
from tqdm.notebook import tqdm # progress bar used in loops
import credentials as cred # make sure to enter your API credentials in the credentials.py file
```
# Prompt
### In the cell below, enter your client ID, client secret, user agent, username, and password in the appropriate place within the quotation marks
```
reddit = praw.Reddit(client_id = 'YOUR_CLIENT_ID',
                     client_secret = 'YOUR_CLIENT_SECRET',
                     username = 'YOUR_USERNAME',
                     password = 'YOUR_PASSWORD',
                     user_agent = 'YOUR_USER_AGENT')
```
# Prompt
## In the cell below, enter a subreddit for which you wish to compare the sentiment of the post comments, decide how far back to pull posts, and how many posts to pull comments from.
## We will be comparing two subreddits, so think of a subject where a comparison might be interesting (e.g. if there are two sides to an issue which may show up in the sentiment analysis as positive and negative scores).
```
number_of_posts = 200
time_period = 'all' # use posts from all time
# .top() can use the time_period argument
# subreddit = reddit.subreddit('').top(time_filter=time_period, limit=number_of_posts)
subreddit = reddit.subreddit('GlobalWarming').hot(limit=number_of_posts)
# Create an empty list to store the data
subreddit_comments = []
# go through each post in our subreddit and put the comment body and id in our dictionary
# the value for 'total' here needs to match 'limit' in reddit.subreddit().top()
for post in tqdm(subreddit, total=number_of_posts):
submission = reddit.submission(id=post)
submission.comments.replace_more(limit=0) # This line of code expands the comments if “load more comments” and “continue this thread” links are encountered
for top_level_comment in submission.comments:
subreddit_comments.append(top_level_comment.body) # add the comment to our list of comments
# View the comments.
print(subreddit_comments)
# Store comments in a DataFrame using a dictionary as our input
# This sets the column name as the key of the dictionary, and the list of values as the values in the DataFrame
subreddit_comments_df = pd.DataFrame(data={'comment': subreddit_comments})
subreddit_comments_df
# This is an example of how we split up the comments into individual words.
# This technique will be used again to get the scores of each individual word.
for comment in subreddit_comments_df['comment']: # loop over each word
comment_words = comment.split() # split comments into individual words
    for word in comment_words: # loop over individual words in each comment
word = word.strip('?:!.,;"!@()#-') # remove extraneous characters
word = word.replace("\n", "") # remove end of line
print(word)
break # end the loop after one comment
```
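A subtlety of the cleaning above, shown on a made-up token: because the punctuation is stripped before the newline is removed, a word ending in punctuation followed by a newline keeps its punctuation.

```python
word = "great!\n"
# order used above: strip punctuation first, then drop the newline --
# the trailing newline shields the '!' from strip()
cleaned_strip_first = word.strip('?:!.,;"!@()#-').replace("\n", "")
# dropping the newline first lets strip() take the '!' too
cleaned_newline_first = word.replace("\n", "").strip('?:!.,;"!@()#-')
print(cleaned_strip_first)    # great!
print(cleaned_newline_first)  # great
```

This is harmless for the sentiment lookup only when the punctuated form is absent from the dictionary, so it is worth being aware of.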
### Now we will use the sentiment file called AFINN-en-165.txt. This file contains a sentiment score for 3382 words. More information can be found here: https://github.com/fnielsen/afinn With the sentiment file we will assign scores to words within the top comments that are found in the AFINN file
```
# We load the AFINN sentiment table into a Python dictionary
sentimentfile = open("AFINN-en-165.txt", "r") # open sentiment file
scores = {} # an empty dictionary
for line in sentimentfile: # loop over each word / sentiment score
word, score = line.split("\t") # file is tab-delimited
    scores[word] = int(score) # convert the scores to integers
sentimentfile.close()
# print out the first 10 entries of the dictionary
counter = 0
for key, value in scores.items():
print(key, ':', value)
counter += 1
if counter >= 10:
break
# we create a dictionary for storing overall counts of sentiment values
sentiments = {"-5": 0, "-4": 0, "-3": 0, "-2": 0, "-1": 0, "0": 0, "1": 0, "2": 0, "3": 0, "4": 0, "5": 0}
for word in subreddit_comments_df['comment']: # loop over each word
comment_words = word.split() # split comments into individual words
for word in comment_words: # loop over individual words in each comment
word = word.strip('?:!.,;"!@()#-') # remove extraneous characters
word = word.replace("\n", "") # remove end of line
if word in scores.keys(): # check if word is in sentiment dictionary
score = scores[word] # check if word is in sentiment dictionary
sentiments[str(score)] += 1 # add one to the sentiment score
# Print the scores
for sentiment_value in range(-5, 6):
# this uses string formatting, more on this here: https://realpython.com/python-f-strings/
print(f"{sentiment_value} sentiment:", sentiments[str(sentiment_value)])
# this would be equivalent, but obviously much less compact and elegant
# print("-5 sentiments ", sentiments["-5"])
# print("-4 sentiments ", sentiments["-4"])
# print("-3 sentiments ", sentiments["-3"])
# print("-2 sentiments ", sentiments["-2"])
# print("-1 sentiments ", sentiments["-1"])
# print(" 0 sentiments ", sentiments["0"])
# print(" 1 sentiments ", sentiments["1"])
# print(" 2 sentiments ", sentiments["2"])
# print(" 3 sentiments ", sentiments["3"])
# print(" 4 sentiments ", sentiments["4"])
# print(" 5 sentiments ", sentiments["5"])
# Now let us put the sentiment scores into a dataframe.
comment_sentiment_df = pd.DataFrame(data={'Sentiment_Value': list(sentiments.keys()), 'Counts': list(sentiments.values())})
# the 'value' column is a string; convert to integer (numeric type)
comment_sentiment_df['Sentiment_Value'] = comment_sentiment_df['Sentiment_Value'].astype('int')
# We normalize the counts so we will be able to compare between two subreddits on the same plot easily
comment_sentiment_df['Normalized_Counts'] = comment_sentiment_df['Counts'] / comment_sentiment_df['Counts'].sum() # Normalize the Count
comment_sentiment_df
```
# Prompt
## We will plot the data so it is easier to visualize.
## In each of the two cells below, plot the Count and the Normalized Count vs Sentiment Value. In each plot add the appropriate x-label, y-label, plot title, and color
```
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Counts'], color='green') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Sentiment Count') # add y-label
plt.title('Reddit Global Warming Sentiment Analysis') # add title
plt.show()
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Normalized_Counts'], color='gray') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Normalized Counts') # add y-label
plt.title('Normalized Counts vs Sentiment Value Plot') # add title
plt.show()
```
# Prompt
### In the cell below, enter a subreddit for which you wish to compare the sentiment of the post comments, decide how far back to pull posts, and how many posts to pull comments from.
Pick a subreddit that can be compared with your first subreddit in terms of sentiment. You may want to go back up to the first subreddit section and change some parameters. For example, do you want to find top posts, or hot posts? From what time period? How many posts? If you change these settings above (the `number_of_posts` and `time_period` variables) you should re-run the notebook from the beginning.
The following code is the same as we did for our first subreddit, just condensed into one code cell.
```
subreddit_2 = reddit.subreddit('Futurology').hot(limit=number_of_posts)
# Create an empty list to store the data
subreddit_comments_2 = []
# go through each post in our subreddit and put the comment body and id in our dictionary
for post in tqdm(subreddit_2, total=number_of_posts):
submission = reddit.submission(id=post)
submission.comments.replace_more(limit=0) # This line of code expands the comments if “load more comments” and “continue this thread” links are encountered
for top_level_comment in submission.comments:
subreddit_comments_2.append(top_level_comment.body) # add the comment to our list of comments
# Store comments in a DataFrame using a dictionary as our input
# This sets the column name as the key of the dictionary, and the list of values as the values in the DataFrame
subreddit_comments_df_2 = pd.DataFrame(data={'comment': subreddit_comments_2})
# we create a dictionary for storing overall counts of sentiment values
sentiments_2 = {"-5": 0, "-4": 0, "-3": 0, "-2": 0, "-1": 0, "0": 0, "1": 0, "2": 0, "3": 0, "4": 0, "5": 0}
for comment in subreddit_comments_df_2['comment']: # loop over each comment
comment_words = comment.split() # split comments into individual words
for word in comment_words: # loop over individual words in each comment
word = word.strip('?:!.,;"!@()#-') # remove extraneous characters
word = word.replace("\n", "") # remove end of line
if word in scores.keys(): # check if word is in sentiment dictionary
score = scores[word] # check if word is in sentiment dictionary
sentiments_2[str(score)] += 1 # add one to the sentiment score
# Now let us put the sentiment scores into a dataframe.
comment_sentiment_df_2 = pd.DataFrame(data={'Sentiment_Value': list(sentiments_2.keys()), 'Counts': list(sentiments_2.values())})
# the 'value' column is a string; convert to integer (numeric type)
comment_sentiment_df_2['Sentiment_Value'] = comment_sentiment_df_2['Sentiment_Value'].astype('int')
# We normalize the counts so we will be able to compare between two subreddits on the same plot easily
comment_sentiment_df_2['Normalized_Counts'] = comment_sentiment_df_2['Counts'] / comment_sentiment_df_2['Counts'].sum() # Normalize the Count
comment_sentiment_df_2
```
# Prompt
## We will plot the data so it is easier to visualize.
## In each of the two cells below, plot the Count and the Normalized Count vs Sentiment Value. In each plot add the appropriate x-label, y-label, plot title, and color
```
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df_2['Sentiment_Value'], comment_sentiment_df_2['Counts'], color='blue') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Sentiment Counts') # add y-label
plt.title('Futurology Reddit Sentiment Value Analysis') # add title
plt.show()
# Normalized Counts vs Sentiment Value Plot
plt.bar(comment_sentiment_df_2['Sentiment_Value'], comment_sentiment_df_2['Normalized_Counts'], color='black') # add the y-values and color
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Normalized Counts') # add y-label
plt.title('Normalized Counts vs Sentiment Value Plot') # add title
plt.show()
```
# Prompt
## Now we will overlay the baseline comment sentiment and the subreddit comment sentiment to help compare.
## In each of the two cells below, overlay the plots of the Count and the Normalized Count vs Sentiment Value. In each plot add the appropriate x-label, y-label, plot title, and plot color
```
# Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Counts'], color='green', label='Global Warming') # add first subreddit data and color
# add second subreddit with a slight offset of x-axis; alpha is opacity/transparency
plt.bar(comment_sentiment_df_2['Sentiment_Value'] + 0.2, comment_sentiment_df_2['Counts'], color='brown', label='Confidence in Future', alpha=0.5) # add second subreddit and color
plt.legend() # show the legend
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Sentiment Count') # add y-label
plt.title('Count vs Sentiment Value') # add title
plt.tight_layout() # tight_layout() automatically adjusts margins to make it look nice
plt.show() # show the plot
# Normalized Count vs Sentiment Value Plot
plt.bar(comment_sentiment_df['Sentiment_Value'], comment_sentiment_df['Normalized_Counts'], color='gray', label='Global Warming') # add first subreddit data and color
ax = plt.gca() # gets current axes of the plot for adding another dataset to the plot
# add second subreddit with a slight offset of x-axis
plt.bar(comment_sentiment_df_2['Sentiment_Value'] + 0.2, comment_sentiment_df_2['Normalized_Counts'], color='blue', label='Confidence in Future', alpha=0.5) # add second subreddit and color
plt.legend() # show the legend
plt.xlabel('Sentiment Value') # add x-label
plt.ylabel('Normalized Counts') # add y-label
plt.title('Normalized Counts vs Sentiment Value') # add title
plt.tight_layout() # tight_layout() automatically adjusts margins to make it look nice
plt.show() # show the plot
```
# Stretch goal (bonus-ish)
### Although this is not formally a bonus for points, it is a learning opportunity. You are not required to complete the following part of this notebook for the assignment.
Our sentiment analysis technique above works, but has some shortcomings. The biggest shortcoming is that each word is treated individually. But what if we have a sentence with a negation? For example:
'This is not a bad thing.'
This sentence should be positive overall, but AFINN only has the word 'bad' in the dictionary, and so the sentence gets an overall negative score of -3.
The most accurate sentiment analysis methods use neural networks to capture context as well as semantics. The drawback of NNs is they are computationally expensive to train and run.
An easier method is to use a slightly-improved sentiment analysis technique, such as TextBlob or VADER (https://github.com/cjhutto/vaderSentiment) in Python. Both libraries use a hand-coded algorithm with word scores like AFINN, but also with additions like negation rules (e.g. a word after 'not' has its score reversed).
Other sentiment analysis libraries in Python can be read about here: https://www.iflexion.com/blog/sentiment-analysis-python
### The stretch goal
The stretch goal is to use other sentiment analysis libraries on the Reddit data we collected, and compare the various approaches (AFINN word-by-word, TextBlob, and VADER) using plots and statistics. For the AFINN word-by-word approach, you will need to either sum up the sentiment scores for each comment, or average them. You might also divide them by 5 to get the values between -1 and +1.
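Here is a sketch of that per-comment AFINN aggregation, using a tiny stand-in dictionary (only 'bad' with its real AFINN score of -3, as noted above; the helper name and normalization choice are assumptions, not part of the assignment):

```python
scores = {'bad': -3}  # stand-in for the full AFINN dictionary loaded earlier

def comment_score(comment, scores, normalize=False):
    """Sum the AFINN scores of the words in one comment.

    With normalize=True the sum is averaged over the scored words and
    divided by 5 to land in [-1, +1], comparable to TextBlob/VADER."""
    word_scores = [scores[w.strip('?:!.,;"!@()#-').lower()]
                   for w in comment.split()
                   if w.strip('?:!.,;"!@()#-').lower() in scores]
    if not word_scores:
        return 0.0
    total = sum(word_scores)
    if normalize:
        return total / len(word_scores) / 5
    return total

print(comment_score('This is not a bad thing.', scores))                  # -3
print(comment_score('This is not a bad thing.', scores, normalize=True))  # -0.6
```

The -0.6 matches the normalized AFINN value quoted for this sentence above, and shows why the word-by-word approach misses the negation that TextBlob and VADER pick up.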
Here is a brief example of getting scores from the 3 methods described above. We can see while the raw AFINN approach gives a score of -0.6 (if normalized), TextBlob shows 0.35 and VADER shows 0.43.
```
!conda install -c conda-forge textblob
!pip install textblob vaderSentiment
sentence = 'This is not a bad thing.'
[(word, scores[word]) for word in sentence.split() if word in scores]
from textblob import TextBlob
tb = TextBlob(sentence)
print(tb.polarity)
print(tb.sentiment_assessments)
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
analyzer.polarity_scores(sentence)
```